Test Report: KVM_Linux_crio 19341

9b97c7bfbeafe185e6db2e35612f0670b350ca0e:2024-07-29:35548

Failed tests (34/320)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 154.26
45 TestAddons/parallel/MetricsServer 313.63
54 TestAddons/StoppedEnableDisable 154.37
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 684.22
178 TestMultiControlPlane/serial/DeleteSecondaryNode 2.93
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.8
180 TestMultiControlPlane/serial/StopCluster 174.09
181 TestMultiControlPlane/serial/RestartCluster 459.82
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.05
183 TestMultiControlPlane/serial/AddSecondaryNode 85.94
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 3.37
240 TestMultiNode/serial/RestartKeepsNodes 327.03
242 TestMultiNode/serial/StopMultiNode 141.38
249 TestPreload 312.75
257 TestKubernetesUpgrade 445.92
292 TestPause/serial/SecondStartNoReconfiguration 74.98
328 TestStartStop/group/old-k8s-version/serial/FirstStart 284.39
350 TestStartStop/group/embed-certs/serial/Stop 139.2
351 TestStartStop/group/no-preload/serial/Stop 138.97
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.13
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 95.36
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 735.7
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.25
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.59
368 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.66
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.47
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 384.81
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 384.92
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 415.13
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 141.51
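
Durations are wall-clock seconds per test. For reference, a single case from this list can usually be re-run in isolation with Go's test filter; the command below is a hedged sketch that assumes a minikube source checkout at the commit above with the binary already built at out/minikube-linux-amd64, and the -tags=integration and -minikube-start-args flag names are assumptions about the integration harness rather than something shown in this report.

	# hedged sketch: re-run one failed integration test against the KVM/crio configuration
	go test -tags=integration -v -timeout 60m ./test/integration \
	  -run 'TestAddons/parallel/Ingress' \
	  -minikube-start-args='--driver=kvm2 --container-runtime=crio'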
TestAddons/parallel/Ingress (154.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-631322 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-631322 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-631322 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5920ddfd-ff15-402c-bf7c-8b1f9591b455] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5920ddfd-ff15-402c-bf7c-8b1f9591b455] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005055639s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-631322 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.46189205s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-631322 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.55
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-631322 addons disable ingress-dns --alsologtostderr -v=1: (1.044570014s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-631322 addons disable ingress --alsologtostderr -v=1: (7.672827034s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-631322 -n addons-631322
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-631322 logs -n 25: (1.241109875s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-320141                                                                     | download-only-320141 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:03 UTC |
	| delete  | -p download-only-679044                                                                     | download-only-679044 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-468907 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC |                     |
	|         | binary-mirror-468907                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44345                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-468907                                                                     | binary-mirror-468907 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC |                     |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC |                     |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-631322 --wait=true                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:08 UTC |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| ip      | addons-631322 ip                                                                            | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-631322 ssh cat                                                                       | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | /opt/local-path-provisioner/pvc-3da48a95-fd4c-467b-9806-616d63c75cdf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | -p addons-631322                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-631322 ssh curl -s                                                                   | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-631322 addons                                                                        | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-631322 addons                                                                        | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | -p addons-631322                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:09 UTC | 29 Jul 24 12:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-631322 ip                                                                            | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:10 UTC | 29 Jul 24 12:10 UTC |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:10 UTC | 29 Jul 24 12:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:10 UTC | 29 Jul 24 12:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:03:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:03:38.785845  241491 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:03:38.786105  241491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:03:38.786114  241491 out.go:304] Setting ErrFile to fd 2...
	I0729 12:03:38.786118  241491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:03:38.786319  241491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:03:38.786923  241491 out.go:298] Setting JSON to false
	I0729 12:03:38.787733  241491 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6362,"bootTime":1722248257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:03:38.787789  241491 start.go:139] virtualization: kvm guest
	I0729 12:03:38.789862  241491 out.go:177] * [addons-631322] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:03:38.791110  241491 notify.go:220] Checking for updates...
	I0729 12:03:38.791119  241491 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:03:38.792482  241491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:03:38.793892  241491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:03:38.795349  241491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:03:38.796544  241491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:03:38.797869  241491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:03:38.799290  241491 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:03:38.830714  241491 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 12:03:38.832014  241491 start.go:297] selected driver: kvm2
	I0729 12:03:38.832025  241491 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:03:38.832039  241491 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:03:38.832680  241491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:03:38.832773  241491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:03:38.847108  241491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:03:38.847152  241491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:03:38.847357  241491 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:03:38.847381  241491 cni.go:84] Creating CNI manager for ""
	I0729 12:03:38.847390  241491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:03:38.847402  241491 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:03:38.847448  241491 start.go:340] cluster config:
	{Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:03:38.847531  241491 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:03:38.849258  241491 out.go:177] * Starting "addons-631322" primary control-plane node in "addons-631322" cluster
	I0729 12:03:38.850606  241491 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:03:38.850635  241491 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:03:38.850663  241491 cache.go:56] Caching tarball of preloaded images
	I0729 12:03:38.850736  241491 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:03:38.850746  241491 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:03:38.851030  241491 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/config.json ...
	I0729 12:03:38.851053  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/config.json: {Name:mk47b09464316e77ac954e90709ba511d6f1c023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:03:38.851174  241491 start.go:360] acquireMachinesLock for addons-631322: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:03:38.851215  241491 start.go:364] duration metric: took 29.949µs to acquireMachinesLock for "addons-631322"
	I0729 12:03:38.851231  241491 start.go:93] Provisioning new machine with config: &{Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:03:38.851300  241491 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 12:03:38.852869  241491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 12:03:38.852995  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:03:38.853029  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:03:38.867004  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38575
	I0729 12:03:38.867437  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:03:38.867992  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:03:38.868018  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:03:38.868422  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:03:38.868606  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:03:38.868752  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:03:38.868899  241491 start.go:159] libmachine.API.Create for "addons-631322" (driver="kvm2")
	I0729 12:03:38.868926  241491 client.go:168] LocalClient.Create starting
	I0729 12:03:38.868959  241491 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem
	I0729 12:03:39.066691  241491 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem
	I0729 12:03:39.134077  241491 main.go:141] libmachine: Running pre-create checks...
	I0729 12:03:39.134101  241491 main.go:141] libmachine: (addons-631322) Calling .PreCreateCheck
	I0729 12:03:39.134642  241491 main.go:141] libmachine: (addons-631322) Calling .GetConfigRaw
	I0729 12:03:39.135136  241491 main.go:141] libmachine: Creating machine...
	I0729 12:03:39.135151  241491 main.go:141] libmachine: (addons-631322) Calling .Create
	I0729 12:03:39.135330  241491 main.go:141] libmachine: (addons-631322) Creating KVM machine...
	I0729 12:03:39.136507  241491 main.go:141] libmachine: (addons-631322) DBG | found existing default KVM network
	I0729 12:03:39.137314  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.137181  241513 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0729 12:03:39.137372  241491 main.go:141] libmachine: (addons-631322) DBG | created network xml: 
	I0729 12:03:39.137397  241491 main.go:141] libmachine: (addons-631322) DBG | <network>
	I0729 12:03:39.137410  241491 main.go:141] libmachine: (addons-631322) DBG |   <name>mk-addons-631322</name>
	I0729 12:03:39.137421  241491 main.go:141] libmachine: (addons-631322) DBG |   <dns enable='no'/>
	I0729 12:03:39.137430  241491 main.go:141] libmachine: (addons-631322) DBG |   
	I0729 12:03:39.137438  241491 main.go:141] libmachine: (addons-631322) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 12:03:39.137477  241491 main.go:141] libmachine: (addons-631322) DBG |     <dhcp>
	I0729 12:03:39.137497  241491 main.go:141] libmachine: (addons-631322) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 12:03:39.137504  241491 main.go:141] libmachine: (addons-631322) DBG |     </dhcp>
	I0729 12:03:39.137511  241491 main.go:141] libmachine: (addons-631322) DBG |   </ip>
	I0729 12:03:39.137595  241491 main.go:141] libmachine: (addons-631322) DBG |   
	I0729 12:03:39.137632  241491 main.go:141] libmachine: (addons-631322) DBG | </network>
	I0729 12:03:39.137650  241491 main.go:141] libmachine: (addons-631322) DBG | 
	I0729 12:03:39.142659  241491 main.go:141] libmachine: (addons-631322) DBG | trying to create private KVM network mk-addons-631322 192.168.39.0/24...
	I0729 12:03:39.205402  241491 main.go:141] libmachine: (addons-631322) DBG | private KVM network mk-addons-631322 192.168.39.0/24 created
	I0729 12:03:39.205453  241491 main.go:141] libmachine: (addons-631322) Setting up store path in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322 ...
	I0729 12:03:39.205488  241491 main.go:141] libmachine: (addons-631322) Building disk image from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:03:39.205505  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.205381  241513 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:03:39.205541  241491 main.go:141] libmachine: (addons-631322) Downloading /home/jenkins/minikube-integration/19341-233093/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 12:03:39.482272  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.482137  241513 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa...
	I0729 12:03:39.587871  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.587680  241513 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/addons-631322.rawdisk...
	I0729 12:03:39.587912  241491 main.go:141] libmachine: (addons-631322) DBG | Writing magic tar header
	I0729 12:03:39.587927  241491 main.go:141] libmachine: (addons-631322) DBG | Writing SSH key tar header
	I0729 12:03:39.587939  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.587858  241513 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322 ...
	I0729 12:03:39.588053  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322
	I0729 12:03:39.588078  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines
	I0729 12:03:39.588087  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322 (perms=drwx------)
	I0729 12:03:39.588103  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines (perms=drwxr-xr-x)
	I0729 12:03:39.588114  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:03:39.588125  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube (perms=drwxr-xr-x)
	I0729 12:03:39.588135  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093 (perms=drwxrwxr-x)
	I0729 12:03:39.588144  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 12:03:39.588152  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 12:03:39.588161  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093
	I0729 12:03:39.588166  241491 main.go:141] libmachine: (addons-631322) Creating domain...
	I0729 12:03:39.588181  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 12:03:39.588193  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins
	I0729 12:03:39.588228  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home
	I0729 12:03:39.588255  241491 main.go:141] libmachine: (addons-631322) DBG | Skipping /home - not owner
	I0729 12:03:39.589216  241491 main.go:141] libmachine: (addons-631322) define libvirt domain using xml: 
	I0729 12:03:39.589237  241491 main.go:141] libmachine: (addons-631322) <domain type='kvm'>
	I0729 12:03:39.589245  241491 main.go:141] libmachine: (addons-631322)   <name>addons-631322</name>
	I0729 12:03:39.589252  241491 main.go:141] libmachine: (addons-631322)   <memory unit='MiB'>4000</memory>
	I0729 12:03:39.589261  241491 main.go:141] libmachine: (addons-631322)   <vcpu>2</vcpu>
	I0729 12:03:39.589268  241491 main.go:141] libmachine: (addons-631322)   <features>
	I0729 12:03:39.589280  241491 main.go:141] libmachine: (addons-631322)     <acpi/>
	I0729 12:03:39.589287  241491 main.go:141] libmachine: (addons-631322)     <apic/>
	I0729 12:03:39.589295  241491 main.go:141] libmachine: (addons-631322)     <pae/>
	I0729 12:03:39.589305  241491 main.go:141] libmachine: (addons-631322)     
	I0729 12:03:39.589316  241491 main.go:141] libmachine: (addons-631322)   </features>
	I0729 12:03:39.589322  241491 main.go:141] libmachine: (addons-631322)   <cpu mode='host-passthrough'>
	I0729 12:03:39.589327  241491 main.go:141] libmachine: (addons-631322)   
	I0729 12:03:39.589340  241491 main.go:141] libmachine: (addons-631322)   </cpu>
	I0729 12:03:39.589345  241491 main.go:141] libmachine: (addons-631322)   <os>
	I0729 12:03:39.589350  241491 main.go:141] libmachine: (addons-631322)     <type>hvm</type>
	I0729 12:03:39.589355  241491 main.go:141] libmachine: (addons-631322)     <boot dev='cdrom'/>
	I0729 12:03:39.589359  241491 main.go:141] libmachine: (addons-631322)     <boot dev='hd'/>
	I0729 12:03:39.589365  241491 main.go:141] libmachine: (addons-631322)     <bootmenu enable='no'/>
	I0729 12:03:39.589368  241491 main.go:141] libmachine: (addons-631322)   </os>
	I0729 12:03:39.589373  241491 main.go:141] libmachine: (addons-631322)   <devices>
	I0729 12:03:39.589380  241491 main.go:141] libmachine: (addons-631322)     <disk type='file' device='cdrom'>
	I0729 12:03:39.589398  241491 main.go:141] libmachine: (addons-631322)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/boot2docker.iso'/>
	I0729 12:03:39.589409  241491 main.go:141] libmachine: (addons-631322)       <target dev='hdc' bus='scsi'/>
	I0729 12:03:39.589414  241491 main.go:141] libmachine: (addons-631322)       <readonly/>
	I0729 12:03:39.589421  241491 main.go:141] libmachine: (addons-631322)     </disk>
	I0729 12:03:39.589427  241491 main.go:141] libmachine: (addons-631322)     <disk type='file' device='disk'>
	I0729 12:03:39.589436  241491 main.go:141] libmachine: (addons-631322)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 12:03:39.589490  241491 main.go:141] libmachine: (addons-631322)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/addons-631322.rawdisk'/>
	I0729 12:03:39.589513  241491 main.go:141] libmachine: (addons-631322)       <target dev='hda' bus='virtio'/>
	I0729 12:03:39.589524  241491 main.go:141] libmachine: (addons-631322)     </disk>
	I0729 12:03:39.589535  241491 main.go:141] libmachine: (addons-631322)     <interface type='network'>
	I0729 12:03:39.589548  241491 main.go:141] libmachine: (addons-631322)       <source network='mk-addons-631322'/>
	I0729 12:03:39.589558  241491 main.go:141] libmachine: (addons-631322)       <model type='virtio'/>
	I0729 12:03:39.589571  241491 main.go:141] libmachine: (addons-631322)     </interface>
	I0729 12:03:39.589598  241491 main.go:141] libmachine: (addons-631322)     <interface type='network'>
	I0729 12:03:39.589609  241491 main.go:141] libmachine: (addons-631322)       <source network='default'/>
	I0729 12:03:39.589620  241491 main.go:141] libmachine: (addons-631322)       <model type='virtio'/>
	I0729 12:03:39.589630  241491 main.go:141] libmachine: (addons-631322)     </interface>
	I0729 12:03:39.589640  241491 main.go:141] libmachine: (addons-631322)     <serial type='pty'>
	I0729 12:03:39.589651  241491 main.go:141] libmachine: (addons-631322)       <target port='0'/>
	I0729 12:03:39.589662  241491 main.go:141] libmachine: (addons-631322)     </serial>
	I0729 12:03:39.589674  241491 main.go:141] libmachine: (addons-631322)     <console type='pty'>
	I0729 12:03:39.589686  241491 main.go:141] libmachine: (addons-631322)       <target type='serial' port='0'/>
	I0729 12:03:39.589702  241491 main.go:141] libmachine: (addons-631322)     </console>
	I0729 12:03:39.589715  241491 main.go:141] libmachine: (addons-631322)     <rng model='virtio'>
	I0729 12:03:39.589726  241491 main.go:141] libmachine: (addons-631322)       <backend model='random'>/dev/random</backend>
	I0729 12:03:39.589735  241491 main.go:141] libmachine: (addons-631322)     </rng>
	I0729 12:03:39.589744  241491 main.go:141] libmachine: (addons-631322)     
	I0729 12:03:39.589753  241491 main.go:141] libmachine: (addons-631322)     
	I0729 12:03:39.589770  241491 main.go:141] libmachine: (addons-631322)   </devices>
	I0729 12:03:39.589781  241491 main.go:141] libmachine: (addons-631322) </domain>
	I0729 12:03:39.589801  241491 main.go:141] libmachine: (addons-631322) 
	I0729 12:03:39.595564  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:39:96:56 in network default
	I0729 12:03:39.596138  241491 main.go:141] libmachine: (addons-631322) Ensuring networks are active...
	I0729 12:03:39.596166  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:39.596676  241491 main.go:141] libmachine: (addons-631322) Ensuring network default is active
	I0729 12:03:39.596928  241491 main.go:141] libmachine: (addons-631322) Ensuring network mk-addons-631322 is active
	I0729 12:03:39.597339  241491 main.go:141] libmachine: (addons-631322) Getting domain xml...
	I0729 12:03:39.598062  241491 main.go:141] libmachine: (addons-631322) Creating domain...
	I0729 12:03:40.974631  241491 main.go:141] libmachine: (addons-631322) Waiting to get IP...
	I0729 12:03:40.975504  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:40.975882  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:40.975912  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:40.975871  241513 retry.go:31] will retry after 221.1026ms: waiting for machine to come up
	I0729 12:03:41.198470  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:41.198967  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:41.199001  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:41.198913  241513 retry.go:31] will retry after 390.326394ms: waiting for machine to come up
	I0729 12:03:41.590590  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:41.590998  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:41.591022  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:41.590969  241513 retry.go:31] will retry after 432.958907ms: waiting for machine to come up
	I0729 12:03:42.025602  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:42.026069  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:42.026099  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:42.026014  241513 retry.go:31] will retry after 601.724783ms: waiting for machine to come up
	I0729 12:03:42.629733  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:42.630146  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:42.630176  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:42.630084  241513 retry.go:31] will retry after 614.697445ms: waiting for machine to come up
	I0729 12:03:43.246453  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:43.246884  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:43.246913  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:43.246831  241513 retry.go:31] will retry after 675.840233ms: waiting for machine to come up
	I0729 12:03:43.924252  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:43.924621  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:43.924648  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:43.924583  241513 retry.go:31] will retry after 1.129870242s: waiting for machine to come up
	I0729 12:03:45.055815  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:45.056264  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:45.056290  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:45.056222  241513 retry.go:31] will retry after 1.407914366s: waiting for machine to come up
	I0729 12:03:46.465921  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:46.466270  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:46.466296  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:46.466222  241513 retry.go:31] will retry after 1.85953515s: waiting for machine to come up
	I0729 12:03:48.327095  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:48.327538  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:48.327564  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:48.327484  241513 retry.go:31] will retry after 1.811774102s: waiting for machine to come up
	I0729 12:03:50.140517  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:50.140992  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:50.141027  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:50.140947  241513 retry.go:31] will retry after 2.1623841s: waiting for machine to come up
	I0729 12:03:52.306212  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:52.306569  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:52.306594  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:52.306506  241513 retry.go:31] will retry after 2.203731396s: waiting for machine to come up
	I0729 12:03:54.511322  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:54.511719  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:54.511746  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:54.511708  241513 retry.go:31] will retry after 3.089723759s: waiting for machine to come up
	I0729 12:03:57.606029  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:57.606410  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:57.606429  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:57.606387  241513 retry.go:31] will retry after 5.382838108s: waiting for machine to come up
	I0729 12:04:02.990939  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:02.991324  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has current primary IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:02.991354  241491 main.go:141] libmachine: (addons-631322) Found IP for machine: 192.168.39.55
	I0729 12:04:02.991368  241491 main.go:141] libmachine: (addons-631322) Reserving static IP address...
	I0729 12:04:02.991651  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find host DHCP lease matching {name: "addons-631322", mac: "52:54:00:47:2e:02", ip: "192.168.39.55"} in network mk-addons-631322
	I0729 12:04:03.062182  241491 main.go:141] libmachine: (addons-631322) DBG | Getting to WaitForSSH function...
	I0729 12:04:03.062215  241491 main.go:141] libmachine: (addons-631322) Reserved static IP address: 192.168.39.55
	I0729 12:04:03.062228  241491 main.go:141] libmachine: (addons-631322) Waiting for SSH to be available...
	I0729 12:04:03.064609  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.065140  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.065182  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.065292  241491 main.go:141] libmachine: (addons-631322) DBG | Using SSH client type: external
	I0729 12:04:03.065326  241491 main.go:141] libmachine: (addons-631322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa (-rw-------)
	I0729 12:04:03.065349  241491 main.go:141] libmachine: (addons-631322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:04:03.065372  241491 main.go:141] libmachine: (addons-631322) DBG | About to run SSH command:
	I0729 12:04:03.065383  241491 main.go:141] libmachine: (addons-631322) DBG | exit 0
	I0729 12:04:03.189041  241491 main.go:141] libmachine: (addons-631322) DBG | SSH cmd err, output: <nil>: 
	I0729 12:04:03.189375  241491 main.go:141] libmachine: (addons-631322) KVM machine creation complete!
	I0729 12:04:03.189606  241491 main.go:141] libmachine: (addons-631322) Calling .GetConfigRaw
	I0729 12:04:03.190160  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:03.190352  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:03.190497  241491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 12:04:03.190511  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:03.191603  241491 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 12:04:03.191617  241491 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 12:04:03.191625  241491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 12:04:03.191631  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.193453  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.193763  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.193784  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.193949  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.194122  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.194283  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.194414  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.194575  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.194767  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.194777  241491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 12:04:03.295899  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:04:03.295932  241491 main.go:141] libmachine: Detecting the provisioner...
	I0729 12:04:03.295940  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.298340  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.298648  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.298679  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.298826  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.299010  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.299197  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.299334  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.299516  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.299672  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.299682  241491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 12:04:03.405668  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 12:04:03.405789  241491 main.go:141] libmachine: found compatible host: buildroot
	I0729 12:04:03.405804  241491 main.go:141] libmachine: Provisioning with buildroot...
	I0729 12:04:03.405817  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:04:03.406088  241491 buildroot.go:166] provisioning hostname "addons-631322"
	I0729 12:04:03.406113  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:04:03.406328  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.408863  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.409159  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.409202  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.409357  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.409604  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.409772  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.410043  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.410231  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.410405  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.410417  241491 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-631322 && echo "addons-631322" | sudo tee /etc/hostname
	I0729 12:04:03.527902  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-631322
	
	I0729 12:04:03.527949  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.530512  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.530859  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.530887  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.531036  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.531235  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.531399  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.531522  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.531655  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.531846  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.531869  241491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-631322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-631322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-631322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:04:03.646014  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:04:03.646054  241491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:04:03.646074  241491 buildroot.go:174] setting up certificates
	I0729 12:04:03.646086  241491 provision.go:84] configureAuth start
	I0729 12:04:03.646095  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:04:03.646410  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:03.648823  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.649195  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.649214  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.649407  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.651502  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.651815  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.651844  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.651974  241491 provision.go:143] copyHostCerts
	I0729 12:04:03.652071  241491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:04:03.652196  241491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:04:03.652264  241491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:04:03.652323  241491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.addons-631322 san=[127.0.0.1 192.168.39.55 addons-631322 localhost minikube]
	I0729 12:04:03.824070  241491 provision.go:177] copyRemoteCerts
	I0729 12:04:03.824140  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:04:03.824164  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.826738  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.827131  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.827165  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.827307  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.827502  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.827665  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.827797  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:03.910795  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 12:04:03.933959  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:04:03.962996  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:04:03.985364  241491 provision.go:87] duration metric: took 339.265549ms to configureAuth
	I0729 12:04:03.985391  241491 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:04:03.985605  241491 config.go:182] Loaded profile config "addons-631322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:03.985716  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.988212  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.988540  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.988575  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.988739  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.988961  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.989121  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.989246  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.989377  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.989541  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.989555  241491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:04:04.252863  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:04:04.252899  241491 main.go:141] libmachine: Checking connection to Docker...
	I0729 12:04:04.252907  241491 main.go:141] libmachine: (addons-631322) Calling .GetURL
	I0729 12:04:04.254059  241491 main.go:141] libmachine: (addons-631322) DBG | Using libvirt version 6000000
	I0729 12:04:04.255919  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.256329  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.256360  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.256470  241491 main.go:141] libmachine: Docker is up and running!
	I0729 12:04:04.256485  241491 main.go:141] libmachine: Reticulating splines...
	I0729 12:04:04.256494  241491 client.go:171] duration metric: took 25.387558324s to LocalClient.Create
	I0729 12:04:04.256514  241491 start.go:167] duration metric: took 25.387616353s to libmachine.API.Create "addons-631322"
	I0729 12:04:04.256523  241491 start.go:293] postStartSetup for "addons-631322" (driver="kvm2")
	I0729 12:04:04.256541  241491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:04:04.256568  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.256851  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:04:04.256881  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.258846  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.259171  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.259203  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.259320  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.259511  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.259664  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.259815  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:04.342865  241491 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:04:04.347630  241491 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:04:04.347654  241491 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:04:04.347728  241491 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:04:04.347756  241491 start.go:296] duration metric: took 91.220597ms for postStartSetup
	I0729 12:04:04.347805  241491 main.go:141] libmachine: (addons-631322) Calling .GetConfigRaw
	I0729 12:04:04.348373  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:04.350735  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.351051  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.351078  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.351303  241491 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/config.json ...
	I0729 12:04:04.351463  241491 start.go:128] duration metric: took 25.500152223s to createHost
	I0729 12:04:04.351484  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.353661  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.353915  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.353938  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.354073  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.354272  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.354441  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.354618  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.354785  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:04.354984  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:04.354997  241491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:04:04.457234  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254644.434549096
	
	I0729 12:04:04.457261  241491 fix.go:216] guest clock: 1722254644.434549096
	I0729 12:04:04.457274  241491 fix.go:229] Guest: 2024-07-29 12:04:04.434549096 +0000 UTC Remote: 2024-07-29 12:04:04.351473847 +0000 UTC m=+25.598919584 (delta=83.075249ms)
	I0729 12:04:04.457310  241491 fix.go:200] guest clock delta is within tolerance: 83.075249ms
	I0729 12:04:04.457316  241491 start.go:83] releasing machines lock for "addons-631322", held for 25.606092699s
	I0729 12:04:04.457346  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.457622  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:04.459908  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.460232  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.460259  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.460379  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.460905  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.461129  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.461230  241491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:04:04.461281  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.461349  241491 ssh_runner.go:195] Run: cat /version.json
	I0729 12:04:04.461376  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.463461  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.463688  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.463782  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.463813  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.463969  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.463968  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.464001  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.464127  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.464188  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.464285  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.464348  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.464422  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:04.464474  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.464594  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:04.541639  241491 ssh_runner.go:195] Run: systemctl --version
	I0729 12:04:04.565723  241491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:04:04.725068  241491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:04:04.730930  241491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:04:04.731003  241491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:04:04.747140  241491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:04:04.747163  241491 start.go:495] detecting cgroup driver to use...
	I0729 12:04:04.747233  241491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:04:04.762268  241491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:04:04.775558  241491 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:04:04.775618  241491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:04:04.788740  241491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:04:04.801864  241491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:04:04.908099  241491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:04:05.070388  241491 docker.go:233] disabling docker service ...
	I0729 12:04:05.070472  241491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:04:05.084857  241491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:04:05.097567  241491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:04:05.218183  241491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:04:05.341114  241491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:04:05.355127  241491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:04:05.372766  241491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:04:05.372844  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.383119  241491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:04:05.383176  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.393788  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.404283  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.414624  241491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:04:05.425117  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.435228  241491 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.451683  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.461864  241491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:04:05.470791  241491 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:04:05.470839  241491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:04:05.483572  241491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:04:05.492603  241491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:04:05.610229  241491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:04:05.738899  241491 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:04:05.738996  241491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:04:05.743470  241491 start.go:563] Will wait 60s for crictl version
	I0729 12:04:05.743519  241491 ssh_runner.go:195] Run: which crictl
	I0729 12:04:05.746979  241491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:04:05.782099  241491 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:04:05.782203  241491 ssh_runner.go:195] Run: crio --version
	I0729 12:04:05.809869  241491 ssh_runner.go:195] Run: crio --version
	I0729 12:04:05.839164  241491 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:04:05.840545  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:05.843203  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:05.843542  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:05.843571  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:05.843779  241491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:04:05.847712  241491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:04:05.859477  241491 kubeadm.go:883] updating cluster {Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:04:05.859598  241491 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:04:05.859640  241491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:04:05.891120  241491 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 12:04:05.891188  241491 ssh_runner.go:195] Run: which lz4
	I0729 12:04:05.895074  241491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 12:04:05.899082  241491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:04:05.899109  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 12:04:07.244805  241491 crio.go:462] duration metric: took 1.34975623s to copy over tarball
	I0729 12:04:07.244874  241491 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 12:04:09.449245  241491 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204347368s)
	I0729 12:04:09.449270  241491 crio.go:469] duration metric: took 2.204436281s to extract the tarball
	I0729 12:04:09.449277  241491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 12:04:09.487216  241491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:04:09.527467  241491 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:04:09.527492  241491 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:04:09.527501  241491 kubeadm.go:934] updating node { 192.168.39.55 8443 v1.30.3 crio true true} ...
	I0729 12:04:09.527608  241491 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-631322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:04:09.527671  241491 ssh_runner.go:195] Run: crio config
	I0729 12:04:09.572260  241491 cni.go:84] Creating CNI manager for ""
	I0729 12:04:09.572281  241491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:04:09.572290  241491 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:04:09.572313  241491 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-631322 NodeName:addons-631322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:04:09.572445  241491 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-631322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:04:09.572509  241491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:04:09.581991  241491 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:04:09.582047  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:04:09.590703  241491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 12:04:09.607019  241491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:04:09.622406  241491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0729 12:04:09.638046  241491 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I0729 12:04:09.641777  241491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:04:09.652863  241491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:04:09.770311  241491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:04:09.787820  241491 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322 for IP: 192.168.39.55
	I0729 12:04:09.787842  241491 certs.go:194] generating shared ca certs ...
	I0729 12:04:09.787859  241491 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.787988  241491 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:04:09.853058  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt ...
	I0729 12:04:09.853088  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt: {Name:mke27a0eb0127502de013bd52c09e0c1c581ed26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.853247  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key ...
	I0729 12:04:09.853257  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key: {Name:mk3457a6f2487a1a6f1af779557867a2e01c1eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.853328  241491 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:04:09.940681  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt ...
	I0729 12:04:09.940709  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt: {Name:mkbc859dc9196fd104e55851409846d48b5b049b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.940884  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key ...
	I0729 12:04:09.940895  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key: {Name:mk7a5c8af9586bdc26928dc16bf94e44d413be49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.940963  241491 certs.go:256] generating profile certs ...
	I0729 12:04:09.941016  241491 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.key
	I0729 12:04:09.941029  241491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt with IP's: []
	I0729 12:04:10.056586  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt ...
	I0729 12:04:10.056616  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: {Name:mk70b0764140f92cd0a8ee2100ee1cfaeceaab30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.056782  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.key ...
	I0729 12:04:10.056805  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.key: {Name:mk03a9aeff9bb7e7dbf216d4adf4ceb122674215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.056878  241491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505
	I0729 12:04:10.056896  241491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.55]
	I0729 12:04:10.215699  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505 ...
	I0729 12:04:10.215732  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505: {Name:mk83c38842f5bab27670a51e22fe8f97c2e52472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.215922  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505 ...
	I0729 12:04:10.215938  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505: {Name:mk787c1efcd4cdbe1f1e99afc46e8fdfdb1326dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.216028  241491 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt
	I0729 12:04:10.216111  241491 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key
	I0729 12:04:10.216155  241491 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key
	I0729 12:04:10.216172  241491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt with IP's: []
	I0729 12:04:10.478758  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt ...
	I0729 12:04:10.478794  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt: {Name:mk04af66d23a52e124de575b89f10821e6f919ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.478952  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key ...
	I0729 12:04:10.478967  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key: {Name:mk5c9ea11c36bde78f317789122b8285064035f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.479128  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:04:10.479162  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:04:10.479187  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:04:10.479209  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:04:10.479830  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:04:10.504907  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:04:10.536699  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:04:10.568519  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:04:10.591439  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 12:04:10.613685  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:04:10.636137  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:04:10.658469  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:04:10.685140  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:04:10.708879  241491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:04:10.725295  241491 ssh_runner.go:195] Run: openssl version
	I0729 12:04:10.730896  241491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:04:10.741235  241491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:04:10.745337  241491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:04:10.745388  241491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:04:10.750981  241491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:04:10.761199  241491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:04:10.765402  241491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 12:04:10.765457  241491 kubeadm.go:392] StartCluster: {Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:04:10.765580  241491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:04:10.765634  241491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:04:10.799344  241491 cri.go:89] found id: ""
	I0729 12:04:10.799413  241491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 12:04:10.809520  241491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 12:04:10.818994  241491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:04:10.828297  241491 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:04:10.828319  241491 kubeadm.go:157] found existing configuration files:
	
	I0729 12:04:10.828357  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:04:10.836896  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:04:10.836951  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:04:10.846501  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:04:10.855322  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:04:10.855366  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:04:10.864439  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:04:10.872910  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:04:10.872958  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:04:10.882003  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:04:10.890964  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:04:10.891025  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
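
The four grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. In this run the files do not exist yet, so every grep exits non-zero and the rm calls are effectively no-ops. A minimal Go sketch of that pattern (an illustration of the technique, not minikube's actual implementation):

    // stale_kubeconfig_cleanup.go - illustrative sketch only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // `grep -q` exits non-zero when the endpoint is absent or the file is missing.
            if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
                fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
                os.Remove(f) // discard the error, like `rm -f`
            }
        }
    }
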
	I0729 12:04:10.900184  241491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 12:04:10.957294  241491 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 12:04:10.957383  241491 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 12:04:11.101216  241491 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 12:04:11.101359  241491 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 12:04:11.101501  241491 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 12:04:11.299180  241491 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 12:04:11.381248  241491 out.go:204]   - Generating certificates and keys ...
	I0729 12:04:11.381355  241491 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 12:04:11.381449  241491 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 12:04:11.615376  241491 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 12:04:11.804184  241491 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 12:04:12.031773  241491 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 12:04:12.363667  241491 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 12:04:12.475072  241491 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 12:04:12.475348  241491 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-631322 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0729 12:04:12.551828  241491 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 12:04:12.552079  241491 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-631322 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0729 12:04:12.598982  241491 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 12:04:13.069678  241491 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 12:04:13.305223  241491 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 12:04:13.305435  241491 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 12:04:13.506561  241491 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 12:04:13.719055  241491 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 12:04:14.130043  241491 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 12:04:14.218646  241491 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 12:04:14.662079  241491 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 12:04:14.662601  241491 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 12:04:14.664855  241491 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 12:04:14.666904  241491 out.go:204]   - Booting up control plane ...
	I0729 12:04:14.667030  241491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 12:04:14.667156  241491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 12:04:14.667260  241491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 12:04:14.682528  241491 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 12:04:14.685057  241491 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 12:04:14.685112  241491 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 12:04:14.813844  241491 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 12:04:14.813963  241491 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 12:04:15.315097  241491 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.872753ms
	I0729 12:04:15.315264  241491 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 12:04:20.318390  241491 kubeadm.go:310] [api-check] The API server is healthy after 5.00371171s
	I0729 12:04:20.331615  241491 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 12:04:20.342822  241491 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 12:04:20.367194  241491 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 12:04:20.367387  241491 kubeadm.go:310] [mark-control-plane] Marking the node addons-631322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 12:04:20.378333  241491 kubeadm.go:310] [bootstrap-token] Using token: x2rsx0.x2zgacijylh4bb28
	I0729 12:04:20.379755  241491 out.go:204]   - Configuring RBAC rules ...
	I0729 12:04:20.379893  241491 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 12:04:20.387631  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 12:04:20.393747  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 12:04:20.396968  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 12:04:20.400076  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 12:04:20.403313  241491 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 12:04:20.726776  241491 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 12:04:21.169414  241491 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 12:04:21.724226  241491 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 12:04:21.724256  241491 kubeadm.go:310] 
	I0729 12:04:21.724322  241491 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 12:04:21.724345  241491 kubeadm.go:310] 
	I0729 12:04:21.724426  241491 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 12:04:21.724444  241491 kubeadm.go:310] 
	I0729 12:04:21.724492  241491 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 12:04:21.724577  241491 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 12:04:21.724660  241491 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 12:04:21.724668  241491 kubeadm.go:310] 
	I0729 12:04:21.724740  241491 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 12:04:21.724757  241491 kubeadm.go:310] 
	I0729 12:04:21.724863  241491 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 12:04:21.724872  241491 kubeadm.go:310] 
	I0729 12:04:21.724914  241491 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 12:04:21.725017  241491 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 12:04:21.725107  241491 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 12:04:21.725127  241491 kubeadm.go:310] 
	I0729 12:04:21.725252  241491 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 12:04:21.725362  241491 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 12:04:21.725373  241491 kubeadm.go:310] 
	I0729 12:04:21.725478  241491 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x2rsx0.x2zgacijylh4bb28 \
	I0729 12:04:21.725643  241491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 12:04:21.725675  241491 kubeadm.go:310] 	--control-plane 
	I0729 12:04:21.725684  241491 kubeadm.go:310] 
	I0729 12:04:21.725799  241491 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 12:04:21.725809  241491 kubeadm.go:310] 
	I0729 12:04:21.725932  241491 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x2rsx0.x2zgacijylh4bb28 \
	I0729 12:04:21.726071  241491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 12:04:21.726213  241491 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
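
The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash value. That hash is the SHA-256 of the cluster CA certificate's Subject Public Key Info, so it can be recomputed on the control-plane node from the CA file; a small sketch follows (the /etc/kubernetes/pki/ca.crt path is the kubeadm default, assumed rather than shown in this log):

    // ca_hash.go - recompute the discovery-token CA cert hash from the CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // kubeadm default path (assumed)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The hash kubeadm prints is SHA-256 over the Subject Public Key Info.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
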
	I0729 12:04:21.726253  241491 cni.go:84] Creating CNI manager for ""
	I0729 12:04:21.726269  241491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:04:21.728007  241491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 12:04:21.729263  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 12:04:21.739807  241491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
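
The contents of the 496-byte /etc/cni/net.d/1-k8s.conflist are not included in the log. As a rough illustration of what a bridge CNI conflist of that size typically contains (an assumption, not the exact file minikube generates), the mkdir plus file write above could be reproduced like this:

    // write_cni_conflist.go - hedged sketch; the JSON is a typical bridge conflist, not minikube's exact template.
    package main

    import (
        "os"
        "path/filepath"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors `sudo mkdir -p /etc/cni/net.d`
            panic(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
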
	I0729 12:04:21.757304  241491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 12:04:21.757392  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:21.757403  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-631322 minikube.k8s.io/updated_at=2024_07_29T12_04_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=addons-631322 minikube.k8s.io/primary=true
	I0729 12:04:21.788741  241491 ops.go:34] apiserver oom_adj: -16
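
Alongside labeling the node, the kubectl call above binds cluster-admin to the kube-system:default service account through the minikube-rbac ClusterRoleBinding (the oom_adj read simply confirms the apiserver's OOM score of -16). The same binding expressed with client-go, as a hedged sketch that would run on the node against the same /var/lib/minikube/kubeconfig, looks roughly like this:

    // minikube_rbac.go - client-go equivalent of the `kubectl create clusterrolebinding minikube-rbac` call above (illustrative).
    package main

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "ClusterRole",
                Name:     "cluster-admin",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "default",
                Namespace: "kube-system",
            }},
        }
        if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
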
	I0729 12:04:21.887541  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:22.387817  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:22.887639  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:23.388478  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:23.887659  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:24.388598  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:24.887881  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:25.387920  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:25.887994  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:26.388506  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:26.888134  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:27.387644  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:27.887692  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:28.388241  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:28.888114  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:29.387934  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:29.887701  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:30.387663  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:30.888542  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:31.388585  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:31.887801  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:32.387988  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:32.888588  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:33.388072  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:33.887865  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:34.387562  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:34.888132  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:34.967324  241491 kubeadm.go:1113] duration metric: took 13.209992637s to wait for elevateKubeSystemPrivileges
	I0729 12:04:34.967364  241491 kubeadm.go:394] duration metric: took 24.201913797s to StartCluster
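
The repeated `kubectl get sa default` runs above are a plain retry loop: the service account is polled roughly every 500ms until it exists, which here took about 13.2s of the 24.2s total StartCluster time. An equivalent hedged sketch using client-go's wait helpers (kubeconfig path, interval, and timeout are assumptions for illustration):

    // wait_default_sa.go - poll until the "default" ServiceAccount exists (illustrative sketch).
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        start := time.Now()
        // Retry every 500ms, for up to 2 minutes, until the default ServiceAccount is present.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                return err == nil, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Printf("default service account ready after %s\n", time.Since(start))
    }
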
	I0729 12:04:34.967389  241491 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:34.967521  241491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:04:34.967960  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:34.968182  241491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 12:04:34.968217  241491 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:04:34.968284  241491 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 12:04:34.968421  241491 addons.go:69] Setting yakd=true in profile "addons-631322"
	I0729 12:04:34.968425  241491 addons.go:69] Setting helm-tiller=true in profile "addons-631322"
	I0729 12:04:34.968445  241491 addons.go:69] Setting inspektor-gadget=true in profile "addons-631322"
	I0729 12:04:34.968462  241491 addons.go:234] Setting addon yakd=true in "addons-631322"
	I0729 12:04:34.968466  241491 addons.go:234] Setting addon inspektor-gadget=true in "addons-631322"
	I0729 12:04:34.968469  241491 addons.go:234] Setting addon helm-tiller=true in "addons-631322"
	I0729 12:04:34.968460  241491 addons.go:69] Setting ingress-dns=true in profile "addons-631322"
	I0729 12:04:34.968479  241491 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-631322"
	I0729 12:04:34.968498  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968505  241491 addons.go:69] Setting registry=true in profile "addons-631322"
	I0729 12:04:34.968508  241491 config.go:182] Loaded profile config "addons-631322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:34.968521  241491 addons.go:234] Setting addon registry=true in "addons-631322"
	I0729 12:04:34.968521  241491 addons.go:69] Setting metrics-server=true in profile "addons-631322"
	I0729 12:04:34.968524  241491 addons.go:69] Setting default-storageclass=true in profile "addons-631322"
	I0729 12:04:34.968538  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968542  241491 addons.go:234] Setting addon metrics-server=true in "addons-631322"
	I0729 12:04:34.968548  241491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-631322"
	I0729 12:04:34.968510  241491 addons.go:69] Setting ingress=true in profile "addons-631322"
	I0729 12:04:34.968584  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968598  241491 addons.go:234] Setting addon ingress=true in "addons-631322"
	I0729 12:04:34.968610  241491 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-631322"
	I0729 12:04:34.968633  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968642  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968498  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968433  241491 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-631322"
	I0729 12:04:34.969031  241491 addons.go:69] Setting gcp-auth=true in profile "addons-631322"
	I0729 12:04:34.969042  241491 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-631322"
	I0729 12:04:34.969045  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969054  241491 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-631322"
	I0729 12:04:34.969055  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969063  241491 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-631322"
	I0729 12:04:34.969067  241491 addons.go:69] Setting storage-provisioner=true in profile "addons-631322"
	I0729 12:04:34.969020  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969077  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969084  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969087  241491 addons.go:234] Setting addon storage-provisioner=true in "addons-631322"
	I0729 12:04:34.969100  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969109  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969175  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969210  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969228  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969252  241491 addons.go:69] Setting volcano=true in profile "addons-631322"
	I0729 12:04:34.969263  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969275  241491 addons.go:234] Setting addon volcano=true in "addons-631322"
	I0729 12:04:34.968516  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969291  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969315  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969027  241491 addons.go:69] Setting cloud-spanner=true in profile "addons-631322"
	I0729 12:04:34.969376  241491 addons.go:69] Setting volumesnapshots=true in profile "addons-631322"
	I0729 12:04:34.969056  241491 mustload.go:65] Loading cluster: addons-631322
	I0729 12:04:34.969387  241491 addons.go:234] Setting addon cloud-spanner=true in "addons-631322"
	I0729 12:04:34.969399  241491 addons.go:234] Setting addon volumesnapshots=true in "addons-631322"
	I0729 12:04:34.969280  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.968502  241491 addons.go:234] Setting addon ingress-dns=true in "addons-631322"
	I0729 12:04:34.969584  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969600  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969610  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969642  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969648  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969699  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969721  241491 config.go:182] Loaded profile config "addons-631322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:34.969773  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969887  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969905  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969957  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969964  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969975  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969986  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970020  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970029  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970062  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970071  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970080  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970088  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.970091  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970125  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970154  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970598  241491 out.go:177] * Verifying Kubernetes components...
	I0729 12:04:34.972138  241491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
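
"Verifying Kubernetes components" corresponds to the VerifyComponents map logged earlier (apiserver, kubelet, node_ready, system_pods, and so on). One of those checks, node readiness, can be sketched with client-go as follows; the host-side kubeconfig path is taken from the log, while the code itself is illustrative rather than the test's own:

    // verify_node_ready.go - list nodes and report their Ready condition (illustrative sketch).
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19341-233093/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    fmt.Printf("%s Ready=%s\n", n.Name, c.Status)
                }
            }
        }
    }
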
	I0729 12:04:34.988249  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0729 12:04:34.988935  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I0729 12:04:34.989135  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0729 12:04:35.000524  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.000582  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.000851  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.001274  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.001491  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.001516  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.001586  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.002103  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.002127  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.002263  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.002282  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.002336  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.002685  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.003142  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.003186  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.006604  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.006971  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.007004  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.007197  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.007249  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.015628  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41805
	I0729 12:04:35.016242  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.016865  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.016884  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.017290  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.017916  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.017957  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.019068  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I0729 12:04:35.019605  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.020165  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.020182  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.020577  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.020848  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
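
The bursts of "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:<port>" / "Calling .GetVersion" lines come from libmachine's plugin model: each addon goroutine spawns the kvm2 driver as a separate process and talks to it over a local RPC port before issuing driver calls such as .GetState. A minimal net/rpc sketch of that pattern (an analogy only; libmachine's real wire protocol and method set are not shown in this log):

    // plugin_rpc.go - local RPC server on a random 127.0.0.1 port plus a client calling it (illustrative).
    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    // Driver exposes the methods the host process calls on the plugin.
    type Driver struct{}

    func (d *Driver) GetVersion(args int, reply *int) error    { *reply = 1; return nil }
    func (d *Driver) GetState(args int, reply *string) error   { *reply = "Running"; return nil }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            panic(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // random port, like the 127.0.0.1:NNNNN lines above
        if err != nil {
            panic(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        go srv.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        var version int
        if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
            panic(err)
        }
        var state string
        if err := client.Call("Driver.GetState", 0, &state); err != nil {
            panic(err)
        }
        fmt.Println("API version:", version, "state:", state)
    }
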
	I0729 12:04:35.024722  241491 addons.go:234] Setting addon default-storageclass=true in "addons-631322"
	I0729 12:04:35.024765  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:35.025335  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.025375  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.025991  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0729 12:04:35.026447  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.026947  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.026965  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.027295  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.027875  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.027910  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.032272  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0729 12:04:35.032865  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.033321  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.033337  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.033754  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.034329  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.034369  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.035032  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0729 12:04:35.035479  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.035992  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.036011  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.036350  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.036883  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.036922  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.043035  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0729 12:04:35.043284  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0729 12:04:35.043847  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.043963  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0729 12:04:35.044600  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.044621  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.045050  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.045261  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.045717  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.045750  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.046407  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.046426  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.047096  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.047707  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.047727  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.047960  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0729 12:04:35.048149  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0729 12:04:35.048295  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.048868  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.048903  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.048979  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0729 12:04:35.049109  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.049149  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.049404  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.049545  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.049558  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.049629  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.049694  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.050010  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.050582  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.050621  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.050902  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.050916  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.051046  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.051056  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.051416  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.051482  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.051972  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.051997  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.052235  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.052888  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.052932  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.054049  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 12:04:35.054395  241491 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 12:04:35.054505  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.054586  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0729 12:04:35.054958  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.055188  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.055206  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.055517  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.055543  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.055609  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.055697  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 12:04:35.055720  241491 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 12:04:35.055740  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.055958  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.056129  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.056175  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.056207  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.058878  241491 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-631322"
	I0729 12:04:35.058924  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:35.059276  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.059305  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.060368  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.060814  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.060838  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.061127  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.061317  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.061516  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.061637  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
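
Each sshutil "new ssh client" line represents an SSH connection to the node (192.168.39.55:22, user docker, the machine's id_rsa key) over which the subsequent scp copies and `Run:` commands are executed. A self-contained sketch of that kind of client using golang.org/x/crypto/ssh; the command at the end is only an example, and host-key checking is skipped purely because this is a throwaway test VM:

    // ssh_run.go - open an SSH session to the node from the log and run one command (illustrative sketch).
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; a real client should verify the host key
        }
        client, err := ssh.Dial("tcp", "192.168.39.55:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo systemctl is-active kubelet") // example command (assumed)
        fmt.Printf("%s err=%v\n", out, err)
    }
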
	I0729 12:04:35.062853  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0729 12:04:35.063335  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.063858  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.063876  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.064260  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.064467  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.066875  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
	I0729 12:04:35.067393  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.067943  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.067962  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.068354  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.068611  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.069797  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:35.070183  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.070218  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.071631  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.073674  241491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:04:35.075044  241491 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:04:35.075065  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 12:04:35.075084  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.078248  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.078617  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.078638  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.078871  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.079111  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.079321  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.079476  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.080508  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41477
	I0729 12:04:35.080685  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0729 12:04:35.081213  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.081816  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.081835  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.082242  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.082499  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.083338  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.083995  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.084014  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.084444  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.084715  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.085721  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.086857  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.087894  241491 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 12:04:35.088730  241491 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 12:04:35.090420  241491 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 12:04:35.090442  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 12:04:35.090462  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.090523  241491 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 12:04:35.091781  241491 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 12:04:35.091806  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 12:04:35.091827  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.094111  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I0729 12:04:35.094965  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.095566  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.095700  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.095724  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.096254  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.096315  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.096330  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.096498  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.096758  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.096986  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.097177  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.097482  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.097761  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.097807  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42175
	I0729 12:04:35.097832  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.097847  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.097963  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.100242  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0729 12:04:35.100287  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 12:04:35.100373  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.100417  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.100374  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0729 12:04:35.100478  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0729 12:04:35.101174  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.101254  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.101253  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.101271  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.101330  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.101508  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.101789  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.102191  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.102255  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0729 12:04:35.102342  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.102356  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.102451  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.102626  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.102640  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.102715  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.102750  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I0729 12:04:35.102780  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.103212  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.103216  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.103225  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.103268  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.103270  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.103295  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.103308  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.103469  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.103665  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.103716  241491 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 12:04:35.103795  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.103823  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.103861  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.104000  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.104011  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.104359  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.104544  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.104716  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0729 12:04:35.104952  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.104966  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.105066  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.105093  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.105387  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.105509  241491 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 12:04:35.105527  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 12:04:35.105544  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.105636  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.105885  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.106290  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.106531  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.106597  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.106925  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.106944  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.107000  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.107373  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.107617  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.108556  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.108662  241491 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 12:04:35.108770  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 12:04:35.108813  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.110605  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.111162  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.111196  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.111373  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.111566  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.111687  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 12:04:35.111744  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.111814  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.111839  241491 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 12:04:35.111861  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 12:04:35.112456  241491 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 12:04:35.112475  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.112094  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.113515  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 12:04:35.113539  241491 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 12:04:35.113872  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 12:04:35.113891  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.114197  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 12:04:35.114998  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 12:04:35.115318  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0729 12:04:35.115746  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 12:04:35.115764  241491 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 12:04:35.115782  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.116505  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.116563  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 12:04:35.116980  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.117008  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.117271  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.117756  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 12:04:35.117819  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.117837  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.118342  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.118903  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.118934  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.118937  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 12:04:35.118968  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0729 12:04:35.119251  241491 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 12:04:35.119274  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 12:04:35.119292  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.119324  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.119250  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.119259  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.119519  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.119658  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.119939  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.119962  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.119951  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.120056  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.120304  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.120351  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.120559  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.120564  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.120826  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.120852  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.121157  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.121490  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.121568  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 12:04:35.122223  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.122823  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.122848  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.122867  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.122987  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.122998  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.123069  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.123343  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.123636  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.123837  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.123876  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.123947  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 12:04:35.124143  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.124699  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.124998  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:35.125015  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:35.125274  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:35.125291  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:35.125306  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:35.125318  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:35.125318  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33113
	I0729 12:04:35.125747  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:35.125763  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:35.125779  241491 main.go:141] libmachine: () Calling .GetVersion
	W0729 12:04:35.125828  241491 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 12:04:35.126129  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.126536  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 12:04:35.127323  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.127341  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.127651  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.127812  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.127867  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0729 12:04:35.128067  241491 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 12:04:35.128777  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.129225  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.129244  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.129368  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 12:04:35.129444  241491 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 12:04:35.129461  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 12:04:35.129481  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.129946  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.130700  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.130700  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.130973  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 12:04:35.130987  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 12:04:35.131003  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.132392  241491 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 12:04:35.133197  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.133384  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0729 12:04:35.133610  241491 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 12:04:35.133631  241491 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 12:04:35.133647  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.133743  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.134159  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.134180  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.134548  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.134713  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.134791  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.135026  241491 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 12:04:35.135150  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.135169  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.135181  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.135400  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.135592  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.135728  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.135915  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.136191  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.136207  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.136479  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.136652  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.136843  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.137011  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.137032  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.137231  241491 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 12:04:35.137248  241491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 12:04:35.137265  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.137389  241491 out.go:177]   - Using image docker.io/busybox:stable
	I0729 12:04:35.137964  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.138291  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.138307  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.138477  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.138620  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.138732  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.138860  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.139205  241491 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 12:04:35.139221  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 12:04:35.139236  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.140355  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.140898  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.140919  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.141109  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.141284  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.141477  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.141660  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.142474  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.142901  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.142913  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.143096  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.143247  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.143362  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.143498  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	W0729 12:04:35.165779  241491 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36328->192.168.39.55:22: read: connection reset by peer
	I0729 12:04:35.165820  241491 retry.go:31] will retry after 350.545163ms: ssh: handshake failed: read tcp 192.168.39.1:36328->192.168.39.55:22: read: connection reset by peer
	I0729 12:04:35.433680  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 12:04:35.433708  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 12:04:35.447859  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:04:35.457905  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 12:04:35.486330  241491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:04:35.486395  241491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 12:04:35.515313  241491 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 12:04:35.515343  241491 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 12:04:35.519239  241491 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 12:04:35.519266  241491 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 12:04:35.569398  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 12:04:35.612308  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 12:04:35.624135  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 12:04:35.624160  241491 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 12:04:35.627666  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 12:04:35.634551  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 12:04:35.634571  241491 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 12:04:35.645654  241491 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 12:04:35.645672  241491 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 12:04:35.657007  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 12:04:35.657027  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 12:04:35.661735  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 12:04:35.710852  241491 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 12:04:35.710875  241491 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 12:04:35.735471  241491 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 12:04:35.735493  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 12:04:35.749072  241491 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 12:04:35.749099  241491 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 12:04:35.824370  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 12:04:35.824396  241491 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 12:04:35.888976  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 12:04:35.889004  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 12:04:35.915194  241491 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 12:04:35.915223  241491 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 12:04:35.922827  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 12:04:35.924583  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 12:04:35.924610  241491 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 12:04:36.040432  241491 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 12:04:36.040462  241491 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 12:04:36.043091  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 12:04:36.057180  241491 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 12:04:36.057202  241491 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 12:04:36.067221  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 12:04:36.107030  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 12:04:36.107084  241491 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 12:04:36.123660  241491 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 12:04:36.123692  241491 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 12:04:36.140680  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 12:04:36.140713  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 12:04:36.193178  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 12:04:36.240756  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 12:04:36.240781  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 12:04:36.248443  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 12:04:36.248475  241491 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 12:04:36.288508  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 12:04:36.288534  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 12:04:36.329772  241491 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 12:04:36.329805  241491 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 12:04:36.546200  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 12:04:36.563455  241491 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 12:04:36.563478  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 12:04:36.657833  241491 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 12:04:36.657864  241491 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 12:04:36.669090  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 12:04:36.669115  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 12:04:36.815978  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 12:04:36.880575  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 12:04:36.880599  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 12:04:36.955276  241491 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 12:04:36.955300  241491 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 12:04:37.117764  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 12:04:37.117797  241491 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 12:04:37.177645  241491 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 12:04:37.177681  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 12:04:37.372778  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 12:04:37.372826  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 12:04:37.449701  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 12:04:37.501248  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 12:04:37.501272  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 12:04:37.735423  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 12:04:37.735456  241491 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 12:04:38.045253  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 12:04:39.547306  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.099410205s)
	I0729 12:04:39.547359  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.547371  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.547367  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.089426656s)
	I0729 12:04:39.547415  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.547425  241491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.061004467s)
	I0729 12:04:39.547454  241491 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.061097534s)
	I0729 12:04:39.547453  241491 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 12:04:39.547527  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.978100223s)
	I0729 12:04:39.547556  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.547575  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.547432  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.548477  241491 node_ready.go:35] waiting up to 6m0s for node "addons-631322" to be "Ready" ...
	I0729 12:04:39.549806  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.549811  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.549819  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.549813  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.549831  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.549834  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.549832  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.549840  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.549844  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.549840  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.549865  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.549850  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.549851  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.549875  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.549952  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.550170  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.550187  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.550174  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.550231  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.550231  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.550207  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.550257  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.550261  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.550272  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.557120  241491 node_ready.go:49] node "addons-631322" has status "Ready":"True"
	I0729 12:04:39.557141  241491 node_ready.go:38] duration metric: took 8.642126ms for node "addons-631322" to be "Ready" ...
	I0729 12:04:39.557151  241491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:04:39.573461  241491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:40.063744  241491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-631322" context rescaled to 1 replicas
	I0729 12:04:41.587429  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:42.148385  241491 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 12:04:42.148440  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:42.151679  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.152138  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:42.152169  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.152334  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:42.152562  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:42.152764  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:42.152937  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:42.584682  241491 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 12:04:42.702635  241491 pod_ready.go:92] pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:42.702661  241491 pod_ready.go:81] duration metric: took 3.129176071s for pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:42.702671  241491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:42.794235  241491 addons.go:234] Setting addon gcp-auth=true in "addons-631322"
	I0729 12:04:42.794295  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:42.794608  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:42.794636  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:42.811033  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0729 12:04:42.811492  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:42.811942  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:42.811966  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:42.812350  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:42.812893  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:42.812925  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:42.828956  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0729 12:04:42.829406  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:42.829877  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:42.829902  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:42.830275  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:42.830507  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:42.832046  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:42.832326  241491 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 12:04:42.832352  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:42.835302  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.835724  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:42.835751  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.835899  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:42.836095  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:42.836253  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:42.836377  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:43.676676  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.064327203s)
	I0729 12:04:43.676735  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.676747  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.676808  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.04909102s)
	I0729 12:04:43.676867  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.015110392s)
	I0729 12:04:43.676895  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.676908  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.676866  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.676950  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.676945  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.754079078s)
	I0729 12:04:43.676984  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677002  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677032  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677043  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.677052  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677060  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677124  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677132  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.677141  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677147  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677155  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.634034481s)
	I0729 12:04:43.677187  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677200  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677274  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.610030183s)
	I0729 12:04:43.677292  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677299  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677365  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.484161305s)
	I0729 12:04:43.677384  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677395  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677463  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.131235328s)
	I0729 12:04:43.677480  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677490  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677618  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.861600082s)
	W0729 12:04:43.677647  241491 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 12:04:43.677674  241491 retry.go:31] will retry after 373.115146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 12:04:43.677796  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.228035637s)
	I0729 12:04:43.677823  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677834  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677942  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.677960  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.677963  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677990  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677993  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.677998  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678004  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.678010  241491 addons.go:475] Verifying addon ingress=true in "addons-631322"
	I0729 12:04:43.678061  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.678139  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.678148  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678175  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.678182  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678191  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.678197  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.678248  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.678266  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.678271  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678279  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.678285  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.678012  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.678113  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.678094  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.679534  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.679545  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.679552  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.680817  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.680823  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.680830  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.680839  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.680846  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.680853  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.680875  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.680895  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.680900  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.680909  241491 addons.go:475] Verifying addon metrics-server=true in "addons-631322"
	I0729 12:04:43.681062  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681088  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681094  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.681102  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.681109  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.681163  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681168  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681193  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681198  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681203  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.681206  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.681211  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.681217  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.680847  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.681657  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681686  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681693  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.682146  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.682163  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.682173  241491 addons.go:475] Verifying addon registry=true in "addons-631322"
	I0729 12:04:43.682708  241491 out.go:177] * Verifying ingress addon...
	I0729 12:04:43.682734  241491 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-631322 service yakd-dashboard -n yakd-dashboard
	
	I0729 12:04:43.683694  241491 out.go:177] * Verifying registry addon...
	I0729 12:04:43.682847  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.684286  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.682867  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.685696  241491 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 12:04:43.685757  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.685788  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.685801  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.686395  241491 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 12:04:43.699861  241491 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 12:04:43.699883  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:43.700250  241491 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 12:04:43.700267  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:43.710848  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.710871  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.711200  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.711224  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 12:04:43.711346  241491 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0729 12:04:43.729266  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.729298  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.729636  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.729641  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.729659  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:44.051662  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 12:04:44.190251  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:44.191649  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:44.696963  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:44.696984  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:44.726322  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:45.208892  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:45.209042  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:45.729574  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:45.730680  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:45.819039  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.773719865s)
	I0729 12:04:45.819045  241491 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.986693401s)
	I0729 12:04:45.819107  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:45.819164  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:45.819491  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:45.819557  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:45.819575  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:45.819587  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:45.819525  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:45.819933  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:45.819950  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:45.819982  241491 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-631322"
	I0729 12:04:45.820708  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 12:04:45.821543  241491 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 12:04:45.822813  241491 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 12:04:45.823750  241491 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 12:04:45.823872  241491 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 12:04:45.823892  241491 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 12:04:45.845539  241491 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 12:04:45.845563  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:45.976753  241491 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 12:04:45.976785  241491 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 12:04:46.032473  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.98075893s)
	I0729 12:04:46.032531  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:46.032541  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:46.032948  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:46.032976  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:46.032985  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:46.032994  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:46.033002  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:46.033375  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:46.033397  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:46.033413  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:46.068897  241491 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 12:04:46.068924  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 12:04:46.123660  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 12:04:46.192329  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:46.192904  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:46.331911  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:46.692963  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:46.693566  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:46.829766  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:47.214496  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:47.214677  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:47.251947  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:47.306070  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.182359347s)
	I0729 12:04:47.306128  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:47.306139  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:47.306466  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:47.306522  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:47.306537  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:47.306545  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:47.306886  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:47.306901  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:47.306915  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:47.308583  241491 addons.go:475] Verifying addon gcp-auth=true in "addons-631322"
	I0729 12:04:47.310221  241491 out.go:177] * Verifying gcp-auth addon...
	I0729 12:04:47.312496  241491 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 12:04:47.326494  241491 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 12:04:47.326517  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:47.345624  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:47.692358  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:47.694535  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:47.826637  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:47.834554  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:48.191118  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:48.193228  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:48.316327  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:48.330133  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:48.691238  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:48.691918  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:48.816814  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:48.828993  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:49.194268  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:49.195939  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:49.315987  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:49.329152  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:49.692457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:49.692715  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:49.708099  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:49.815977  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:49.829906  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:50.190475  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:50.190620  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:50.316866  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:50.330713  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:50.691443  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:50.691949  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:50.816495  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:50.828939  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:51.192140  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:51.193220  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:51.316320  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:51.328417  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:51.694599  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:51.697217  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:51.709108  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:51.816259  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:51.830420  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:52.194727  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:52.195373  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:52.208346  241491 pod_ready.go:97] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.55 HostIPs:[{IP:192.168.39.55}] PodIP: PodIPs:[] StartTime:2024-07-29 12:04:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 12:04:41 +0000 UTC,FinishedAt:2024-07-29 12:04:51 +0000 UTC,ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82 Started:0xc0022a23c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 12:04:52.208381  241491 pod_ready.go:81] duration metric: took 9.505702586s for pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace to be "Ready" ...
	E0729 12:04:52.208395  241491 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.55 HostIPs:[{IP:192.168.39.55}] PodIP: PodIPs:[] StartTime:2024-07-29 12:04:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 12:04:41 +0000 UTC,FinishedAt:2024-07-29 12:04:51 +0000 UTC,ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82 Started:0xc0022a23c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 12:04:52.208405  241491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.212489  241491 pod_ready.go:92] pod "etcd-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.212510  241491 pod_ready.go:81] duration metric: took 4.09539ms for pod "etcd-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.212522  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.218174  241491 pod_ready.go:92] pod "kube-apiserver-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.218195  241491 pod_ready.go:81] duration metric: took 5.665997ms for pod "kube-apiserver-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.218208  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.225050  241491 pod_ready.go:92] pod "kube-controller-manager-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.225070  241491 pod_ready.go:81] duration metric: took 6.854586ms for pod "kube-controller-manager-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.225084  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fp2hh" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.230399  241491 pod_ready.go:92] pod "kube-proxy-fp2hh" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.230419  241491 pod_ready.go:81] duration metric: took 5.327391ms for pod "kube-proxy-fp2hh" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.230431  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.316447  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:52.328434  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:52.605825  241491 pod_ready.go:92] pod "kube-scheduler-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.605849  241491 pod_ready.go:81] duration metric: took 375.412476ms for pod "kube-scheduler-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.605859  241491 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.692519  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:52.694347  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:52.815470  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:52.832673  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:53.190792  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:53.191462  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:53.316242  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:53.328338  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:53.690989  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:53.691043  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:53.816101  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:53.834176  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:54.190234  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:54.193578  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:54.316680  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:54.329022  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:54.612173  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:54.689971  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:54.691199  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:54.816083  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:54.828507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:55.192572  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:55.192701  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:55.316823  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:55.329406  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:55.692475  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:55.692554  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:55.816984  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:55.830135  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:56.194285  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:56.195622  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:56.316734  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:56.334145  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:56.613221  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:56.690701  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:56.693975  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:56.816005  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:56.830119  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:57.192678  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:57.192997  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:57.317445  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:57.329198  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:57.691687  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:57.697745  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:57.817484  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:57.828890  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:58.189208  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:58.191591  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:58.316682  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:58.329258  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:58.690894  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:58.691232  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:58.816686  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:58.829181  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:59.111151  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:59.190122  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:59.192025  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:59.315658  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:59.328755  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:59.692475  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:59.692639  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:59.816914  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:59.831407  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:00.189773  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:00.191773  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:00.315978  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:00.329596  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:00.690048  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:00.691692  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:00.817568  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:00.829040  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:01.111888  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:01.190808  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:01.191667  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:01.316826  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:01.329279  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:01.694022  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:01.694464  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:01.816696  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:01.829508  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:02.190999  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:02.191485  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:02.316260  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:02.328267  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:02.691173  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:02.692440  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:02.816057  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:02.830309  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:03.112051  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:03.190466  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:03.192026  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:03.315956  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:03.329630  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:03.689836  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:03.691261  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:03.816306  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:03.828943  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:04.266191  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:04.270289  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:04.316066  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:04.330314  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:04.691110  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:04.692075  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:04.817005  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:04.836774  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:05.112387  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:05.191403  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:05.192754  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:05.316472  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:05.328521  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:05.693813  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:05.694077  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:05.817258  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:05.828676  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:06.191984  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:06.192746  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:06.633597  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:06.638912  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:06.692064  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:06.692491  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:06.815747  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:06.829483  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:07.192691  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:07.192898  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:07.316536  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:07.329583  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:07.611745  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:07.692152  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:07.692651  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:07.816678  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:07.829187  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:08.111939  241491 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"True"
	I0729 12:05:08.111961  241491 pod_ready.go:81] duration metric: took 15.506095853s for pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace to be "Ready" ...
	I0729 12:05:08.111989  241491 pod_ready.go:38] duration metric: took 28.554818245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:05:08.112006  241491 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:05:08.112055  241491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:05:08.131111  241491 api_server.go:72] duration metric: took 33.162863602s to wait for apiserver process to appear ...
	I0729 12:05:08.131136  241491 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:05:08.131162  241491 api_server.go:253] Checking apiserver healthz at https://192.168.39.55:8443/healthz ...
	I0729 12:05:08.135210  241491 api_server.go:279] https://192.168.39.55:8443/healthz returned 200:
	ok
	I0729 12:05:08.136742  241491 api_server.go:141] control plane version: v1.30.3
	I0729 12:05:08.136762  241491 api_server.go:131] duration metric: took 5.62042ms to wait for apiserver health ...
	I0729 12:05:08.136770  241491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:05:08.148121  241491 system_pods.go:59] 18 kube-system pods found
	I0729 12:05:08.148146  241491 system_pods.go:61] "coredns-7db6d8ff4d-kr89x" [d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef] Running
	I0729 12:05:08.148154  241491 system_pods.go:61] "csi-hostpath-attacher-0" [e09927aa-20b1-40f7-ab75-fa9174452e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 12:05:08.148161  241491 system_pods.go:61] "csi-hostpath-resizer-0" [ab998654-3f6a-44cd-974c-011bed87cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 12:05:08.148167  241491 system_pods.go:61] "csi-hostpathplugin-kklhd" [b8cf1b29-7f6d-42f2-9ff3-42552849b06f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 12:05:08.148172  241491 system_pods.go:61] "etcd-addons-631322" [edef4d1d-878d-41c8-9f1f-5f905891bb1c] Running
	I0729 12:05:08.148177  241491 system_pods.go:61] "kube-apiserver-addons-631322" [dd83ef61-2e39-4360-a7c3-4e39579177c6] Running
	I0729 12:05:08.148181  241491 system_pods.go:61] "kube-controller-manager-addons-631322" [6d69ce90-11a1-4768-94f6-c42861eddc35] Running
	I0729 12:05:08.148189  241491 system_pods.go:61] "kube-ingress-dns-minikube" [ed104c0c-e54d-49f9-a443-bfeafe4cd1ef] Running
	I0729 12:05:08.148192  241491 system_pods.go:61] "kube-proxy-fp2hh" [02cf9a19-5834-400f-a520-406afe4dba9c] Running
	I0729 12:05:08.148199  241491 system_pods.go:61] "kube-scheduler-addons-631322" [c9f210fd-eaf2-49d3-b379-0ad0f1f2b54b] Running
	I0729 12:05:08.148203  241491 system_pods.go:61] "metrics-server-c59844bb4-5ckgn" [635ee934-5845-4b41-b592-e16cd7ca050a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 12:05:08.148211  241491 system_pods.go:61] "nvidia-device-plugin-daemonset-m8p57" [0f635111-3024-43e1-bb48-73600f90a010] Running
	I0729 12:05:08.148217  241491 system_pods.go:61] "registry-656c9c8d9c-n8scc" [01e3eb64-3cfb-4c8e-885d-d83fc4087b8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 12:05:08.148222  241491 system_pods.go:61] "registry-proxy-74lcm" [24d73911-de6a-48f4-94d5-427b8aabe740] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 12:05:08.148229  241491 system_pods.go:61] "snapshot-controller-745499f584-v67xh" [3ca7bbdd-71f4-4b73-81d2-43a1c496b3f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.148235  241491 system_pods.go:61] "snapshot-controller-745499f584-z8fzs" [c6a55e4c-6022-48ae-81b6-392c34013809] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.148239  241491 system_pods.go:61] "storage-provisioner" [1db38ec0-1c47-4390-9f77-8348dbc84682] Running
	I0729 12:05:08.148244  241491 system_pods.go:61] "tiller-deploy-6677d64bcd-sngfl" [9dcb8698-4a1e-4840-be97-c1bd6d3fd69a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 12:05:08.148251  241491 system_pods.go:74] duration metric: took 11.475234ms to wait for pod list to return data ...
	I0729 12:05:08.148260  241491 default_sa.go:34] waiting for default service account to be created ...
	I0729 12:05:08.150080  241491 default_sa.go:45] found service account: "default"
	I0729 12:05:08.150096  241491 default_sa.go:55] duration metric: took 1.830977ms for default service account to be created ...
	I0729 12:05:08.150102  241491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 12:05:08.159660  241491 system_pods.go:86] 18 kube-system pods found
	I0729 12:05:08.159686  241491 system_pods.go:89] "coredns-7db6d8ff4d-kr89x" [d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef] Running
	I0729 12:05:08.159696  241491 system_pods.go:89] "csi-hostpath-attacher-0" [e09927aa-20b1-40f7-ab75-fa9174452e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 12:05:08.159702  241491 system_pods.go:89] "csi-hostpath-resizer-0" [ab998654-3f6a-44cd-974c-011bed87cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 12:05:08.159710  241491 system_pods.go:89] "csi-hostpathplugin-kklhd" [b8cf1b29-7f6d-42f2-9ff3-42552849b06f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 12:05:08.159716  241491 system_pods.go:89] "etcd-addons-631322" [edef4d1d-878d-41c8-9f1f-5f905891bb1c] Running
	I0729 12:05:08.159721  241491 system_pods.go:89] "kube-apiserver-addons-631322" [dd83ef61-2e39-4360-a7c3-4e39579177c6] Running
	I0729 12:05:08.159725  241491 system_pods.go:89] "kube-controller-manager-addons-631322" [6d69ce90-11a1-4768-94f6-c42861eddc35] Running
	I0729 12:05:08.159731  241491 system_pods.go:89] "kube-ingress-dns-minikube" [ed104c0c-e54d-49f9-a443-bfeafe4cd1ef] Running
	I0729 12:05:08.159737  241491 system_pods.go:89] "kube-proxy-fp2hh" [02cf9a19-5834-400f-a520-406afe4dba9c] Running
	I0729 12:05:08.159742  241491 system_pods.go:89] "kube-scheduler-addons-631322" [c9f210fd-eaf2-49d3-b379-0ad0f1f2b54b] Running
	I0729 12:05:08.159748  241491 system_pods.go:89] "metrics-server-c59844bb4-5ckgn" [635ee934-5845-4b41-b592-e16cd7ca050a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 12:05:08.159757  241491 system_pods.go:89] "nvidia-device-plugin-daemonset-m8p57" [0f635111-3024-43e1-bb48-73600f90a010] Running
	I0729 12:05:08.159762  241491 system_pods.go:89] "registry-656c9c8d9c-n8scc" [01e3eb64-3cfb-4c8e-885d-d83fc4087b8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 12:05:08.159769  241491 system_pods.go:89] "registry-proxy-74lcm" [24d73911-de6a-48f4-94d5-427b8aabe740] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 12:05:08.159776  241491 system_pods.go:89] "snapshot-controller-745499f584-v67xh" [3ca7bbdd-71f4-4b73-81d2-43a1c496b3f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.159785  241491 system_pods.go:89] "snapshot-controller-745499f584-z8fzs" [c6a55e4c-6022-48ae-81b6-392c34013809] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.159791  241491 system_pods.go:89] "storage-provisioner" [1db38ec0-1c47-4390-9f77-8348dbc84682] Running
	I0729 12:05:08.159796  241491 system_pods.go:89] "tiller-deploy-6677d64bcd-sngfl" [9dcb8698-4a1e-4840-be97-c1bd6d3fd69a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 12:05:08.159802  241491 system_pods.go:126] duration metric: took 9.695717ms to wait for k8s-apps to be running ...
	I0729 12:05:08.159824  241491 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 12:05:08.159868  241491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:05:08.175772  241491 system_svc.go:56] duration metric: took 15.939846ms WaitForService to wait for kubelet
	I0729 12:05:08.175801  241491 kubeadm.go:582] duration metric: took 33.20755649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:05:08.175822  241491 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:05:08.178747  241491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:05:08.178772  241491 node_conditions.go:123] node cpu capacity is 2
	I0729 12:05:08.178786  241491 node_conditions.go:105] duration metric: took 2.959334ms to run NodePressure ...
	I0729 12:05:08.178798  241491 start.go:241] waiting for startup goroutines ...
	I0729 12:05:08.190942  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:08.191959  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:08.316221  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:08.330455  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:08.693379  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:08.693498  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:08.816832  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:08.830006  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:09.190682  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:09.192430  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:09.316884  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:09.330356  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:09.698091  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:09.698220  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:09.817131  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:09.829537  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:10.190773  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:10.190980  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:10.316353  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:10.329085  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:10.690659  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:10.691512  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:10.817188  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:10.829853  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:11.190260  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:11.192320  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:11.316336  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:11.328934  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:11.692930  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:11.693136  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:11.816803  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:11.829359  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:12.191551  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:12.192651  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:12.316526  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:12.329408  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:12.692673  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:12.704178  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:12.816243  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:12.828543  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:13.191014  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:13.192776  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:13.315638  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:13.329035  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:13.691868  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:13.694016  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:13.816521  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:13.830131  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:14.190504  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:14.191602  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:14.317223  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:14.330798  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:15.050155  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:15.052852  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:15.053161  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:15.053596  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:15.238095  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:15.240787  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:15.316876  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:15.329315  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:15.691970  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:15.694359  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:15.823680  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:15.828601  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:16.190285  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:16.192112  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:16.315891  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:16.329343  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:16.690185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:16.692637  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:16.816784  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:16.833950  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:17.192043  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:17.194073  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:17.316816  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:17.329932  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:17.691981  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:17.693247  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:17.816505  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:17.829349  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:18.192362  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:18.192674  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:18.316312  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:18.328867  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:18.690266  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:18.691975  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:18.816097  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:18.828208  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:19.190046  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:19.191489  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:19.316567  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:19.328847  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:19.691160  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:19.691218  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:19.816126  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:19.829340  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:20.189847  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:20.191039  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:20.316417  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:20.328504  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:20.690312  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:20.691597  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:20.817325  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:20.828288  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:21.189844  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:21.192150  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:21.316049  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:21.330998  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:21.690945  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:21.691557  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:21.816514  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:21.829857  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:22.192081  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:22.192117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:22.316357  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:22.331438  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:22.689591  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:22.691972  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:22.815996  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:22.829352  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:23.190928  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:23.191212  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:23.316456  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:23.330047  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:23.691397  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:23.692621  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:23.816513  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:23.829112  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:24.190244  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:24.191537  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:24.316829  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:24.331586  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:24.690639  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:24.692067  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:24.816317  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:24.829571  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:25.189965  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:25.191271  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:25.316380  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:25.328912  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:25.696854  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:25.697143  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:25.816890  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:25.829157  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:26.192806  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:26.192911  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:26.318005  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:26.331251  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:26.691026  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:26.691126  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:26.816244  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:26.829179  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:27.192046  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:27.192335  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:27.316253  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:27.328835  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:27.690175  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:27.691798  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:27.818354  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:27.829258  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:28.190034  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:28.191390  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:28.317842  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:28.330837  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:28.690275  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:28.691255  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:28.816630  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:28.829256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:29.189827  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:29.191168  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:29.316071  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:29.328477  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:29.690343  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:29.691836  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:29.817056  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:29.828696  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:30.190387  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:30.190717  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:30.317121  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:30.329671  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:30.691294  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:30.691472  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:30.816745  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:30.833535  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:31.192155  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:31.193556  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:31.316717  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:31.329352  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:31.691438  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:31.691523  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:31.817065  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:31.830243  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:32.193234  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:32.198143  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:32.317007  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:32.336224  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:32.733778  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:32.735806  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:32.816673  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:32.828943  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:33.191431  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:33.193056  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:33.316017  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:33.328514  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:33.691826  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:33.693171  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:33.816224  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:33.830046  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:34.191771  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:34.192215  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:34.316698  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:34.330086  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:34.690684  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:34.691643  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:34.817370  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:34.828610  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:35.525413  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:35.540456  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:35.541760  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:35.543399  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:35.690999  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:35.691185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:35.817067  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:35.832760  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:36.190192  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:36.196667  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:36.316779  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:36.330403  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:36.690236  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:36.692049  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:36.816470  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:36.829186  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:37.189688  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:37.191666  241491 kapi.go:107] duration metric: took 53.505272119s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 12:05:37.318027  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:37.330913  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:37.691127  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:37.817220  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:37.829766  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:38.190790  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:38.317331  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:38.328871  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:38.690882  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:38.816647  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:38.829905  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:39.189681  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:39.316658  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:39.329394  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:39.691185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:39.819024  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:39.829757  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:40.190117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:40.315974  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:40.328861  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:40.690449  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:40.816082  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:40.830156  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:41.192307  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:41.316104  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:41.328477  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:41.689906  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:41.817639  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:41.829742  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:42.190408  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:42.316080  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:42.330211  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:42.690730  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:42.817037  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:42.829413  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:43.190798  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:43.316721  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:43.329777  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:43.689712  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:43.816256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:43.829122  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:44.190526  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:44.316579  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:44.329806  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:44.690213  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:44.817058  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:44.829684  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:45.189779  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:45.316339  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:45.328656  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:45.690168  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:45.815828  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:45.830437  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:46.192322  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:46.316917  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:46.329692  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:46.690751  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:46.817618  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:46.832965  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:47.190763  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:47.470384  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:47.470733  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:47.690363  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:47.816088  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:47.831066  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:48.191584  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:48.317708  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:48.330899  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:48.690438  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:48.816357  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:48.828457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:49.190527  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:49.316281  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:49.328176  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:49.689971  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:49.816505  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:49.829006  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:50.190378  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:50.316439  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:50.329015  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:50.690281  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:50.816430  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:50.830851  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:51.190561  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:51.316812  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:51.328954  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:51.691452  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:51.816038  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:51.830019  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:52.190991  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:52.316772  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:52.332123  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:52.690932  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:52.816139  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:52.831022  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:53.190118  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:53.315383  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:53.328703  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:53.690930  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:53.816457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:53.828733  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:54.189751  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:54.316457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:54.328842  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:54.690653  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:54.816606  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:54.829351  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:55.189651  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:55.316426  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:55.332885  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:55.692856  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:55.815909  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:55.829486  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:56.190623  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:56.315973  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:56.329317  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:56.690010  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:56.816507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:56.833979  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:57.190741  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:57.316274  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:57.328320  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:57.692071  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:57.819253  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:57.833028  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:58.189459  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:58.317652  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:58.328606  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:58.691335  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:58.816311  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:58.829292  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:59.190013  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:59.316550  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:59.329071  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:59.690561  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:59.816172  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:59.828384  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:00.190224  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:00.320613  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:00.330019  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:00.689732  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:00.816297  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:00.828824  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:01.494432  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:01.495888  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:01.515272  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:01.694378  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:01.816080  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:01.828280  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:02.189721  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:02.316547  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:02.328264  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:02.690662  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:02.816104  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:02.828538  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:03.189886  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:03.317444  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:03.340143  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:03.691157  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:03.816047  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:03.829412  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:04.191171  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:04.315919  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:04.329178  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:04.690462  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:04.817063  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:04.830110  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:05.196546  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:05.317155  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:05.328902  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:05.690561  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:05.818375  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:05.835684  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:06.190523  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:06.316080  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:06.328177  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:06.695260  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:06.816784  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:06.830615  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:07.189730  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:07.316390  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:07.328571  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:07.690070  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:07.816821  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:07.829479  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:08.190336  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:08.318256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:08.329510  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:08.690546  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:08.816569  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:08.828620  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:09.193008  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:09.316434  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:09.329274  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:09.690437  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:09.816945  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:09.829442  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:10.615542  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:10.616744  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:10.618278  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:10.690327  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:10.816413  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:10.834615  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:11.190597  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:11.316610  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:11.328771  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:11.690869  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:11.816584  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:11.829444  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:12.194664  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:12.316422  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:12.329487  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:12.691236  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:12.815938  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:12.834241  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:13.190056  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:13.323033  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:13.332046  241491 kapi.go:107] duration metric: took 1m27.508290896s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 12:06:13.690671  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:13.816093  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:14.190966  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:14.316507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:14.691111  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:14.817480  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:15.191185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:15.316064  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:15.690881  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:15.817059  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:16.190441  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:16.316487  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:16.690972  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:16.817161  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:17.190645  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:17.316766  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:17.690508  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:17.816235  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:18.190649  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:18.316456  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:18.691106  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:18.816138  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:19.192869  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:19.316845  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:19.690338  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:19.815507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:20.191638  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:20.316314  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:20.690754  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:20.817207  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:21.190533  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:21.318120  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:21.690915  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:21.816061  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:22.190401  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:22.315891  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:22.689866  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:22.817269  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:23.191052  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:23.317209  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:23.690117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:23.816697  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:24.191064  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:24.316504  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:24.690547  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:24.816193  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:25.190695  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:25.316591  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:25.691021  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:25.817103  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:26.190846  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:26.316977  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:26.690596  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:26.816899  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:27.190276  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:27.315983  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:27.690458  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:27.816010  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:28.190559  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:28.316638  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:28.691520  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:28.816990  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:29.190891  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:29.316613  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:29.691466  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:29.816734  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:30.189734  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:30.316746  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:30.690601  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:30.816693  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:31.190843  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:31.316725  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:31.690432  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:31.816231  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:32.190988  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:32.317082  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:32.690083  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:32.817008  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:33.190468  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:33.316193  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:33.692005  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:33.817554  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:34.190226  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:34.316836  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:34.690141  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:34.816962  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:35.190216  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:35.315943  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:35.690597  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:35.816203  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:36.190630  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:36.317887  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:36.690303  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:36.816065  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:37.191289  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:37.316111  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:37.690401  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:37.816095  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:38.191168  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:38.315911  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:38.690014  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:38.816202  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:39.189770  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:39.316576  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:39.691167  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:39.816942  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:40.190145  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:40.315944  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:40.689989  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:40.816950  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:41.190489  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:41.318211  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:41.690687  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:41.816173  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:42.190745  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:42.316141  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:42.690147  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:42.815808  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:43.190117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:43.315831  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:43.689924  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:43.815723  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:44.191171  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:44.317458  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:44.691806  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:44.817163  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:45.190479  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:45.316096  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:45.690993  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:45.817262  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:46.191300  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:46.317312  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:46.690736  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:46.817256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:47.190655  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:47.316147  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:47.690225  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:47.815924  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:48.189898  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:48.317612  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:48.692401  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:48.816621  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:49.190516  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:49.316577  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:49.690812  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:49.816682  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:50.190854  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:50.316984  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:50.690514  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:50.816127  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:51.190788  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:51.316891  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:51.690126  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:51.815622  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:52.191061  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:52.316914  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:52.689901  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:52.816655  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:53.190825  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:53.316470  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:53.691279  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:53.816142  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:54.190908  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:54.316412  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:54.691044  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:54.817149  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:55.191176  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:55.315864  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:55.690513  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:55.816124  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:56.191005  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:56.317779  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:56.690154  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:56.815765  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:57.190260  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:57.316113  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:57.692405  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:57.817487  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:58.194315  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:58.315545  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:58.690582  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:58.816492  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:59.190423  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:59.315945  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:59.690116  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:59.816254  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:00.197132  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:00.316754  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:00.691207  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:00.815994  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:01.190720  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:01.317650  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:01.690040  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:01.816556  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:02.190765  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:02.316719  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:02.689845  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:02.816540  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:03.191315  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:03.316281  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:03.690529  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:03.816047  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:04.191177  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:04.316659  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:04.694100  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:04.816828  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:05.193884  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:05.317337  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:05.690723  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:05.816452  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:06.567016  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:06.567294  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:06.692157  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:06.816158  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:07.189998  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:07.316903  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:07.691154  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:07.815950  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:08.192386  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:08.321263  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:08.769351  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:09.055973  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:09.193116  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:09.317175  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:09.693523  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:09.816135  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:10.190406  241491 kapi.go:107] duration metric: took 2m26.504709624s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 12:07:10.316310  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:10.816264  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:11.316903  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:11.816362  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:12.316604  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:12.816140  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:13.316618  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:13.821625  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:14.316424  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:14.816208  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:15.316642  241491 kapi.go:107] duration metric: took 2m28.00414682s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 12:07:15.318590  241491 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-631322 cluster.
	I0729 12:07:15.319990  241491 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 12:07:15.321385  241491 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 12:07:15.323147  241491 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, helm-tiller, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 12:07:15.324417  241491 addons.go:510] duration metric: took 2m40.356141624s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server helm-tiller yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
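	As a sketch of the gcp-auth hints printed above: the log only names the `gcp-auth-skip-secret` label key, so the "true" value, the pod name, and the image below are assumptions for illustration, not taken from this report. A pod that should not receive the mounted GCP credentials might be labelled like this:

	    # hypothetical manifest; only the label key comes from the log above
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds            # example name, not from the report
	      labels:
	        gcp-auth-skip-secret: "true" # value assumed; log only specifies the key
	    spec:
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]

	Per the same hint, pods created before the addon was enabled would need to be recreated, or the addon re-enabled with the --refresh flag, before credentials appear in them.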
	I0729 12:07:15.324461  241491 start.go:246] waiting for cluster config update ...
	I0729 12:07:15.324481  241491 start.go:255] writing updated cluster config ...
	I0729 12:07:15.324741  241491 ssh_runner.go:195] Run: rm -f paused
	I0729 12:07:15.376478  241491 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 12:07:15.378201  241491 out.go:177] * Done! kubectl is now configured to use "addons-631322" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.867549818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255051867525375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47e25694-3a1d-4999-845f-efcbba1d152c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.868266538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7e1880c-ffe1-449f-999f-78d20cf7055f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.868341947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7e1880c-ffe1-449f-999f-78d20cf7055f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.870178750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad07f3df11b626302014e6609f2371a7e6068abe18e3bca287745a46c571e1f,PodSandboxId:f58eca62c44a2cb34d6ba65b875de3aa3e382b07c8e5f8bd79204e7d880c5d8e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254758484062276,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljfg6,io.kube
rnetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75c6dec3-1323-4d05-bdec-4acd460085d8,},Annotations:map[string]string{io.kubernetes.container.hash: e2480a23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978edbef1d26364b5710a9f3a37efb4e1fe94cf23436f2bfd04c4af1ff13e17a,PodSandboxId:d0163bbddbc95492f8a59b68ee813c7593e6c2dd29fcaaca66aab778b83d8aab,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254757866471878,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingres
s-nginx-admission-create-j25dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3ff205b-3518-41f7-8fde-514ffe949c69,},Annotations:map[string]string{io.kubernetes.container.hash: bd402fa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.n
ame: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172
2254681096904423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784
d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3
edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7e1880c-ffe1-449f-999f-78d20cf7055f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.918981519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27903617-8a4b-421d-9ff1-181e7823c66e name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.919073293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27903617-8a4b-421d-9ff1-181e7823c66e name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.920148831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80419908-54b9-40e5-a634-443e39790155 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.921383280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255051921359742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80419908-54b9-40e5-a634-443e39790155 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.922241838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e4197c5-15da-4224-96aa-5bfd5b15f590 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.922341539Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e4197c5-15da-4224-96aa-5bfd5b15f590 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.922737143Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad07f3df11b626302014e6609f2371a7e6068abe18e3bca287745a46c571e1f,PodSandboxId:f58eca62c44a2cb34d6ba65b875de3aa3e382b07c8e5f8bd79204e7d880c5d8e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254758484062276,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljfg6,io.kube
rnetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75c6dec3-1323-4d05-bdec-4acd460085d8,},Annotations:map[string]string{io.kubernetes.container.hash: e2480a23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978edbef1d26364b5710a9f3a37efb4e1fe94cf23436f2bfd04c4af1ff13e17a,PodSandboxId:d0163bbddbc95492f8a59b68ee813c7593e6c2dd29fcaaca66aab778b83d8aab,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254757866471878,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingres
s-nginx-admission-create-j25dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3ff205b-3518-41f7-8fde-514ffe949c69,},Annotations:map[string]string{io.kubernetes.container.hash: bd402fa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.n
ame: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172
2254681096904423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784
d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3
edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e4197c5-15da-4224-96aa-5bfd5b15f590 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.960249517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17e1d1d0-c5ea-4528-aebb-d4f18db3f32c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.960444718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17e1d1d0-c5ea-4528-aebb-d4f18db3f32c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.961665532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=848bf19d-bd7b-4381-8a80-4eecd483211c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.963073375Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255051963049239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=848bf19d-bd7b-4381-8a80-4eecd483211c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.963752757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17fcbc4f-1cbb-4130-8a50-d5766e1a6e4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.963877956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17fcbc4f-1cbb-4130-8a50-d5766e1a6e4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.964189935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad07f3df11b626302014e6609f2371a7e6068abe18e3bca287745a46c571e1f,PodSandboxId:f58eca62c44a2cb34d6ba65b875de3aa3e382b07c8e5f8bd79204e7d880c5d8e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254758484062276,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljfg6,io.kube
rnetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75c6dec3-1323-4d05-bdec-4acd460085d8,},Annotations:map[string]string{io.kubernetes.container.hash: e2480a23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978edbef1d26364b5710a9f3a37efb4e1fe94cf23436f2bfd04c4af1ff13e17a,PodSandboxId:d0163bbddbc95492f8a59b68ee813c7593e6c2dd29fcaaca66aab778b83d8aab,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254757866471878,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingres
s-nginx-admission-create-j25dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3ff205b-3518-41f7-8fde-514ffe949c69,},Annotations:map[string]string{io.kubernetes.container.hash: bd402fa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.n
ame: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172
2254681096904423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784
d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3
edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17fcbc4f-1cbb-4130-8a50-d5766e1a6e4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.997590232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97b1fab2-748f-4102-bc6a-db28914244d0 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.997675621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97b1fab2-748f-4102-bc6a-db28914244d0 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:51 addons-631322 crio[681]: time="2024-07-29 12:10:51.999308308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=757a4b98-bcc6-479d-9375-7c2a26a62a93 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:52 addons-631322 crio[681]: time="2024-07-29 12:10:52.000562323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255052000537157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=757a4b98-bcc6-479d-9375-7c2a26a62a93 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:52 addons-631322 crio[681]: time="2024-07-29 12:10:52.001172265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21d33c72-e595-442e-8022-3f109ebcaf21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:52 addons-631322 crio[681]: time="2024-07-29 12:10:52.001248983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21d33c72-e595-442e-8022-3f109ebcaf21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:52 addons-631322 crio[681]: time="2024-07-29 12:10:52.001567210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad07f3df11b626302014e6609f2371a7e6068abe18e3bca287745a46c571e1f,PodSandboxId:f58eca62c44a2cb34d6ba65b875de3aa3e382b07c8e5f8bd79204e7d880c5d8e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254758484062276,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljfg6,io.kube
rnetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75c6dec3-1323-4d05-bdec-4acd460085d8,},Annotations:map[string]string{io.kubernetes.container.hash: e2480a23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978edbef1d26364b5710a9f3a37efb4e1fe94cf23436f2bfd04c4af1ff13e17a,PodSandboxId:d0163bbddbc95492f8a59b68ee813c7593e6c2dd29fcaaca66aab778b83d8aab,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722254757866471878,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingres
s-nginx-admission-create-j25dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3ff205b-3518-41f7-8fde-514ffe949c69,},Annotations:map[string]string{io.kubernetes.container.hash: bd402fa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.n
ame: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172
2254681096904423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784
d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3
edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21d33c72-e595-442e-8022-3f109ebcaf21 name=/runtime.v1.RuntimeService/ListContainers
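The Version, ImageFsInfo and ListContainers entries above are CRI debug traffic: each request/response pair that CRI-O served while these diagnostics were gathered is logged with a matching id. For illustration only (not part of the captured output), the same ListContainers call can be issued by hand against the CRI socket named in the node annotations further below:

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a   # list all containers, running and exited

crictl renders the response in the same tabular form as the "==> container status <==" section that follows.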
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	aeb506ebad23e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago        Running             hello-world-app           0                   cb02b74aaacea       hello-world-app-6778b5fc9f-gks46
	ee4288c06debb       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        About a minute ago   Running             headlamp                  0                   f4a1507010891       headlamp-7867546754-f7h5m
	f9528646d04ce       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago        Running             nginx                     0                   7e4a70225bfc9       nginx
	a9dc9fcabacb6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago        Running             busybox                   0                   2826518d0876d       busybox
	dad07f3df11b6       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago        Exited              patch                     1                   f58eca62c44a2       ingress-nginx-admission-patch-ljfg6
	978edbef1d263       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago        Exited              create                    0                   d0163bbddbc95       ingress-nginx-admission-create-j25dq
	0f5644a7a58fd       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        5 minutes ago        Running             metrics-server            0                   a776831ecd944       metrics-server-c59844bb4-5ckgn
	cce0435c87268       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             6 minutes ago        Running             storage-provisioner       1                   4ecb5de2617f1       storage-provisioner
	d217b1229e1c1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             6 minutes ago        Running             coredns                   0                   66da728d5e7b0       coredns-7db6d8ff4d-kr89x
	55022b7395a48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             6 minutes ago        Exited              storage-provisioner       0                   4ecb5de2617f1       storage-provisioner
	8c5341ae2d216       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             6 minutes ago        Running             kube-proxy                0                   bb587bf7205ca       kube-proxy-fp2hh
	df940fb1a7f53       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             6 minutes ago        Running             kube-controller-manager   0                   75b5841d4238f       kube-controller-manager-addons-631322
	390f0d98bf88b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago        Running             etcd                      0                   65699b4b1d01d       etcd-addons-631322
	15cd83442f243       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             6 minutes ago        Running             kube-apiserver            0                   36fa1f60dada3       kube-apiserver-addons-631322
	93c99c28f8c24       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             6 minutes ago        Running             kube-scheduler            0                   19b45a3ca0106       kube-scheduler-addons-631322
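The table above is the runtime's per-container view of the node, one row per container with its owning pod. For the pod-level view of the same workloads, the cluster can be queried through the test's kubeconfig context; the command below is a sketch for reference, not part of the captured report:

  kubectl --context addons-631322 get pods -A -o wide   # all namespaces, with node and pod IP columns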
	
	
	==> coredns [d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24] <==
	[INFO] 10.244.0.8:38022 - 8040 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008526s
	[INFO] 10.244.0.8:53962 - 17306 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058472s
	[INFO] 10.244.0.8:53962 - 6548 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063898s
	[INFO] 10.244.0.8:33416 - 4338 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010187s
	[INFO] 10.244.0.8:33416 - 30960 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061915s
	[INFO] 10.244.0.8:51045 - 64265 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008912s
	[INFO] 10.244.0.8:51045 - 17674 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057691s
	[INFO] 10.244.0.8:45862 - 44362 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137005s
	[INFO] 10.244.0.8:45862 - 61519 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000028329s
	[INFO] 10.244.0.8:40403 - 6616 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054623s
	[INFO] 10.244.0.8:40403 - 990 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037149s
	[INFO] 10.244.0.8:59377 - 30614 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052023s
	[INFO] 10.244.0.8:59377 - 41623 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000022872s
	[INFO] 10.244.0.8:47916 - 10964 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049728s
	[INFO] 10.244.0.8:47916 - 34775 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035977s
	[INFO] 10.244.0.22:48402 - 57253 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000633417s
	[INFO] 10.244.0.22:39577 - 50268 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000661586s
	[INFO] 10.244.0.22:42292 - 26778 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001334s
	[INFO] 10.244.0.22:56873 - 9097 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191236s
	[INFO] 10.244.0.22:58338 - 6010 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000725s
	[INFO] 10.244.0.22:49619 - 13408 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064622s
	[INFO] 10.244.0.22:58157 - 47120 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000825776s
	[INFO] 10.244.0.22:48918 - 61652 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0009465s
	[INFO] 10.244.0.25:52737 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000283391s
	[INFO] 10.244.0.25:37171 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000136457s
	
	
	==> describe nodes <==
	Name:               addons-631322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-631322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=addons-631322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_04_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-631322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:04:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-631322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:10:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:09:28 +0000   Mon, 29 Jul 2024 12:04:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:09:28 +0000   Mon, 29 Jul 2024 12:04:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:09:28 +0000   Mon, 29 Jul 2024 12:04:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:09:28 +0000   Mon, 29 Jul 2024 12:04:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    addons-631322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbfd3884a4b246a2a72c3d23bb089cf3
	  System UUID:                dbfd3884-a4b2-46a2-a72c-3d23bb089cf3
	  Boot ID:                    7ae55269-1cce-42ab-9e04-3fd98ff87fed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  default                     hello-world-app-6778b5fc9f-gks46         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  headlamp                    headlamp-7867546754-f7h5m                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 coredns-7db6d8ff4d-kr89x                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m17s
	  kube-system                 etcd-addons-631322                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m31s
	  kube-system                 kube-apiserver-addons-631322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-addons-631322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-fp2hh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-addons-631322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 metrics-server-c59844bb4-5ckgn           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m13s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m11s  kube-proxy       
	  Normal  Starting                 6m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s  kubelet          Node addons-631322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s  kubelet          Node addons-631322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s  kubelet          Node addons-631322 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m30s  kubelet          Node addons-631322 status is now: NodeReady
	  Normal  RegisteredNode           6m18s  node-controller  Node addons-631322 event: Registered Node addons-631322 in Controller
	
	
	==> dmesg <==
	[  +5.042291] kauditd_printk_skb: 168 callbacks suppressed
	[  +6.463948] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 12:05] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.677313] kauditd_printk_skb: 4 callbacks suppressed
	[ +33.459945] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.028638] kauditd_printk_skb: 55 callbacks suppressed
	[Jul29 12:06] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.112027] kauditd_printk_skb: 17 callbacks suppressed
	[ +45.192936] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:07] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.108107] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.823976] kauditd_printk_skb: 9 callbacks suppressed
	[ +18.316332] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.275601] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.706141] kauditd_printk_skb: 39 callbacks suppressed
	[Jul29 12:08] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.255238] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.048850] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.050065] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.312562] kauditd_printk_skb: 8 callbacks suppressed
	[ +13.277376] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.020295] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.880183] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.193694] kauditd_printk_skb: 16 callbacks suppressed
	[Jul29 12:10] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0] <==
	{"level":"warn","ts":"2024-07-29T12:07:06.544173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.828352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4365"}
	{"level":"info","ts":"2024-07-29T12:07:06.544223Z","caller":"traceutil/trace.go:171","msg":"trace[1191993287] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1252; }","duration":"245.900821ms","start":"2024-07-29T12:07:06.298314Z","end":"2024-07-29T12:07:06.544215Z","steps":["trace[1191993287] 'agreement among raft nodes before linearized reading'  (duration: 245.790183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:07:06.544413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.640768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:07:06.544461Z","caller":"traceutil/trace.go:171","msg":"trace[1130243268] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1252; }","duration":"115.707354ms","start":"2024-07-29T12:07:06.428744Z","end":"2024-07-29T12:07:06.544451Z","steps":["trace[1130243268] 'agreement among raft nodes before linearized reading'  (duration: 115.644327ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:07:08.750064Z","caller":"traceutil/trace.go:171","msg":"trace[927867220] linearizableReadLoop","detail":"{readStateIndex:1304; appliedIndex:1303; }","duration":"198.50366ms","start":"2024-07-29T12:07:08.551537Z","end":"2024-07-29T12:07:08.750041Z","steps":["trace[927867220] 'read index received'  (duration: 198.340512ms)","trace[927867220] 'applied index is now lower than readState.Index'  (duration: 162.449µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:07:08.750157Z","caller":"traceutil/trace.go:171","msg":"trace[696740883] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"207.210253ms","start":"2024-07-29T12:07:08.54294Z","end":"2024-07-29T12:07:08.750151Z","steps":["trace[696740883] 'process raft request'  (duration: 206.993535ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:07:08.75051Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.921978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-29T12:07:08.750555Z","caller":"traceutil/trace.go:171","msg":"trace[597650258] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1255; }","duration":"199.041685ms","start":"2024-07-29T12:07:08.551505Z","end":"2024-07-29T12:07:08.750547Z","steps":["trace[597650258] 'agreement among raft nodes before linearized reading'  (duration: 198.899705ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:07:09.032969Z","caller":"traceutil/trace.go:171","msg":"trace[1476798487] linearizableReadLoop","detail":"{readStateIndex:1305; appliedIndex:1304; }","duration":"234.920667ms","start":"2024-07-29T12:07:08.798033Z","end":"2024-07-29T12:07:09.032953Z","steps":["trace[1476798487] 'read index received'  (duration: 230.43909ms)","trace[1476798487] 'applied index is now lower than readState.Index'  (duration: 4.480852ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:07:09.033469Z","caller":"traceutil/trace.go:171","msg":"trace[1748490560] transaction","detail":"{read_only:false; response_revision:1256; number_of_response:1; }","duration":"277.754772ms","start":"2024-07-29T12:07:08.755702Z","end":"2024-07-29T12:07:09.033457Z","steps":["trace[1748490560] 'process raft request'  (duration: 272.840422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:07:09.03451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.46247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4365"}
	{"level":"info","ts":"2024-07-29T12:07:09.037925Z","caller":"traceutil/trace.go:171","msg":"trace[162234053] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1256; }","duration":"239.905325ms","start":"2024-07-29T12:07:08.798009Z","end":"2024-07-29T12:07:09.037914Z","steps":["trace[162234053] 'agreement among raft nodes before linearized reading'  (duration: 236.415614ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:08:02.428973Z","caller":"traceutil/trace.go:171","msg":"trace[2143638598] linearizableReadLoop","detail":"{readStateIndex:1628; appliedIndex:1627; }","duration":"184.268511ms","start":"2024-07-29T12:08:02.24462Z","end":"2024-07-29T12:08:02.428888Z","steps":["trace[2143638598] 'read index received'  (duration: 184.043907ms)","trace[2143638598] 'applied index is now lower than readState.Index'  (duration: 223.935µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:08:02.429613Z","caller":"traceutil/trace.go:171","msg":"trace[1148497118] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"458.297332ms","start":"2024-07-29T12:08:01.971304Z","end":"2024-07-29T12:08:02.429601Z","steps":["trace[1148497118] 'process raft request'  (duration: 457.390154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:08:02.431922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T12:08:01.971293Z","time spent":"460.379637ms","remote":"127.0.0.1:60164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1987,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/namespaces/gadget\" mod_revision:1490 > success:<request_put:<key:\"/registry/namespaces/gadget\" value_size:1952 >> failure:<request_range:<key:\"/registry/namespaces/gadget\" > >"}
	{"level":"warn","ts":"2024-07-29T12:08:02.430331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.682264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11013"}
	{"level":"info","ts":"2024-07-29T12:08:02.432853Z","caller":"traceutil/trace.go:171","msg":"trace[874712766] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1559; }","duration":"188.236637ms","start":"2024-07-29T12:08:02.244594Z","end":"2024-07-29T12:08:02.43283Z","steps":["trace[874712766] 'agreement among raft nodes before linearized reading'  (duration: 185.629326ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:09:00.56862Z","caller":"traceutil/trace.go:171","msg":"trace[279079184] linearizableReadLoop","detail":"{readStateIndex:2051; appliedIndex:2050; }","duration":"141.066707ms","start":"2024-07-29T12:09:00.42752Z","end":"2024-07-29T12:09:00.568587Z","steps":["trace[279079184] 'read index received'  (duration: 140.991583ms)","trace[279079184] 'applied index is now lower than readState.Index'  (duration: 74.258µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:09:00.56878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.232663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:09:00.568853Z","caller":"traceutil/trace.go:171","msg":"trace[1854130269] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1964; }","duration":"141.351652ms","start":"2024-07-29T12:09:00.427493Z","end":"2024-07-29T12:09:00.568844Z","steps":["trace[1854130269] 'agreement among raft nodes before linearized reading'  (duration: 141.227318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:09:00.568682Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T12:09:00.222052Z","time spent":"346.618783ms","remote":"127.0.0.1:60072","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-07-29T12:09:00.799875Z","caller":"traceutil/trace.go:171","msg":"trace[1991790229] transaction","detail":"{read_only:false; response_revision:1965; number_of_response:1; }","duration":"229.480947ms","start":"2024-07-29T12:09:00.57019Z","end":"2024-07-29T12:09:00.799671Z","steps":["trace[1991790229] 'process raft request'  (duration: 228.418258ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:09:00.800215Z","caller":"traceutil/trace.go:171","msg":"trace[621356182] linearizableReadLoop","detail":"{readStateIndex:2052; appliedIndex:2051; }","duration":"131.618464ms","start":"2024-07-29T12:09:00.667974Z","end":"2024-07-29T12:09:00.799592Z","steps":["trace[621356182] 'read index received'  (duration: 130.57059ms)","trace[621356182] 'applied index is now lower than readState.Index'  (duration: 1.047403ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:09:00.800745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.772177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3607"}
	{"level":"info","ts":"2024-07-29T12:09:00.800776Z","caller":"traceutil/trace.go:171","msg":"trace[1494508932] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1965; }","duration":"132.81481ms","start":"2024-07-29T12:09:00.667952Z","end":"2024-07-29T12:09:00.800767Z","steps":["trace[1494508932] 'agreement among raft nodes before linearized reading'  (duration: 132.638027ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:10:52 up 7 min,  0 users,  load average: 0.59, 0.99, 0.57
	Linux addons-631322 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4] <==
	W0729 12:07:57.842838       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 12:08:09.306026       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 12:08:19.164243       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 12:08:19.409905       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.127.103"}
	E0729 12:08:31.062592       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 12:08:45.974061       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:45.974118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.034128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.034185       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.046036       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.046089       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.057167       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.060867       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.121506       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.121558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 12:08:47.046722       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 12:08:47.122433       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0729 12:08:47.122540       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0729 12:08:52.572768       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.33.154"}
	I0729 12:09:00.802153       1 trace.go:236] Trace[371396522]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.55,type:*v1.Endpoints,resource:apiServerIPInfo (29-Jul-2024 12:09:00.220) (total time: 581ms):
	Trace[371396522]: ---"Transaction prepared" 348ms (12:09:00.569)
	Trace[371396522]: ---"Txn call completed" 232ms (12:09:00.802)
	Trace[371396522]: [581.910767ms] [581.910767ms] END
	I0729 12:10:42.196678       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.237.23"}
	E0729 12:10:44.173733       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591] <==
	E0729 12:09:28.365558       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:09:31.383526       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:09:31.383672       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:09:47.817569       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:09:47.817636       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:09:56.689368       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:09:56.689513       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:10:13.577579       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:10:13.577734       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:10:16.901855       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:10:16.901900       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:10:30.569888       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:10:30.569996       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:10:33.236633       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:10:33.236674       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 12:10:42.027352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="34.125243ms"
	I0729 12:10:42.039379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.176266ms"
	I0729 12:10:42.039454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="36.81µs"
	I0729 12:10:44.054083       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0729 12:10:44.058306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="11.514µs"
	I0729 12:10:44.064223       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0729 12:10:46.200277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="10.65943ms"
	I0729 12:10:46.200776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="97.54µs"
	W0729 12:10:48.625600       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:10:48.625739       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824] <==
	I0729 12:04:40.600353       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:04:40.627495       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	I0729 12:04:40.790265       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:04:40.790323       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:04:40.790341       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:04:40.798179       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:04:40.798369       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:04:40.798398       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:04:40.800633       1 config.go:192] "Starting service config controller"
	I0729 12:04:40.800659       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:04:40.800720       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:04:40.800727       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:04:40.801143       1 config.go:319] "Starting node config controller"
	I0729 12:04:40.801150       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:04:40.901758       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:04:40.901844       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:04:40.901865       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52] <==
	W0729 12:04:18.612576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:04:18.613323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:04:18.612695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:04:18.613374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:04:18.612751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 12:04:18.613422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 12:04:18.612849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:04:18.613470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:04:18.607462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:04:18.613524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:04:18.615208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:04:18.615262       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:04:19.495529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:04:19.495629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:04:19.641980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:04:19.643228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:04:19.704090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:04:19.704187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:04:19.724202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 12:04:19.724383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:04:19.747423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:04:19.747509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:04:20.037328       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:04:20.037454       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 12:04:22.798275       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:10:21 addons-631322 kubelet[1277]: E0729 12:10:21.059279    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:10:21 addons-631322 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:10:21 addons-631322 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:10:21 addons-631322 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:10:21 addons-631322 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:10:42 addons-631322 kubelet[1277]: I0729 12:10:42.022058    1277 topology_manager.go:215] "Topology Admit Handler" podUID="b899b274-8d19-46a4-8c01-0d036b3673f1" podNamespace="default" podName="hello-world-app-6778b5fc9f-gks46"
	Jul 29 12:10:42 addons-631322 kubelet[1277]: I0729 12:10:42.118369    1277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf6x\" (UniqueName: \"kubernetes.io/projected/b899b274-8d19-46a4-8c01-0d036b3673f1-kube-api-access-lxf6x\") pod \"hello-world-app-6778b5fc9f-gks46\" (UID: \"b899b274-8d19-46a4-8c01-0d036b3673f1\") " pod="default/hello-world-app-6778b5fc9f-gks46"
	Jul 29 12:10:43 addons-631322 kubelet[1277]: I0729 12:10:43.141345    1277 scope.go:117] "RemoveContainer" containerID="46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5"
	Jul 29 12:10:43 addons-631322 kubelet[1277]: I0729 12:10:43.161193    1277 scope.go:117] "RemoveContainer" containerID="46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5"
	Jul 29 12:10:43 addons-631322 kubelet[1277]: E0729 12:10:43.161954    1277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5\": container with ID starting with 46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5 not found: ID does not exist" containerID="46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5"
	Jul 29 12:10:43 addons-631322 kubelet[1277]: I0729 12:10:43.162074    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5"} err="failed to get container status \"46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5\": rpc error: code = NotFound desc = could not find container \"46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5\": container with ID starting with 46939c2cda1a0e62a28d1a12f79e1821918d16d1f5b09695c782c44b022f4dc5 not found: ID does not exist"
	Jul 29 12:10:43 addons-631322 kubelet[1277]: I0729 12:10:43.226012    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx22w\" (UniqueName: \"kubernetes.io/projected/ed104c0c-e54d-49f9-a443-bfeafe4cd1ef-kube-api-access-lx22w\") pod \"ed104c0c-e54d-49f9-a443-bfeafe4cd1ef\" (UID: \"ed104c0c-e54d-49f9-a443-bfeafe4cd1ef\") "
	Jul 29 12:10:43 addons-631322 kubelet[1277]: I0729 12:10:43.228206    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed104c0c-e54d-49f9-a443-bfeafe4cd1ef-kube-api-access-lx22w" (OuterVolumeSpecName: "kube-api-access-lx22w") pod "ed104c0c-e54d-49f9-a443-bfeafe4cd1ef" (UID: "ed104c0c-e54d-49f9-a443-bfeafe4cd1ef"). InnerVolumeSpecName "kube-api-access-lx22w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 12:10:43 addons-631322 kubelet[1277]: I0729 12:10:43.327401    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lx22w\" (UniqueName: \"kubernetes.io/projected/ed104c0c-e54d-49f9-a443-bfeafe4cd1ef-kube-api-access-lx22w\") on node \"addons-631322\" DevicePath \"\""
	Jul 29 12:10:45 addons-631322 kubelet[1277]: I0729 12:10:45.031659    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c6dec3-1323-4d05-bdec-4acd460085d8" path="/var/lib/kubelet/pods/75c6dec3-1323-4d05-bdec-4acd460085d8/volumes"
	Jul 29 12:10:45 addons-631322 kubelet[1277]: I0729 12:10:45.032430    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ff205b-3518-41f7-8fde-514ffe949c69" path="/var/lib/kubelet/pods/c3ff205b-3518-41f7-8fde-514ffe949c69/volumes"
	Jul 29 12:10:45 addons-631322 kubelet[1277]: I0729 12:10:45.032911    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed104c0c-e54d-49f9-a443-bfeafe4cd1ef" path="/var/lib/kubelet/pods/ed104c0c-e54d-49f9-a443-bfeafe4cd1ef/volumes"
	Jul 29 12:10:47 addons-631322 kubelet[1277]: I0729 12:10:47.358244    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/821fc9f8-c76d-405b-b6c8-4fbfd4a09af4-webhook-cert\") pod \"821fc9f8-c76d-405b-b6c8-4fbfd4a09af4\" (UID: \"821fc9f8-c76d-405b-b6c8-4fbfd4a09af4\") "
	Jul 29 12:10:47 addons-631322 kubelet[1277]: I0729 12:10:47.358295    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs8fd\" (UniqueName: \"kubernetes.io/projected/821fc9f8-c76d-405b-b6c8-4fbfd4a09af4-kube-api-access-cs8fd\") pod \"821fc9f8-c76d-405b-b6c8-4fbfd4a09af4\" (UID: \"821fc9f8-c76d-405b-b6c8-4fbfd4a09af4\") "
	Jul 29 12:10:47 addons-631322 kubelet[1277]: I0729 12:10:47.361710    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821fc9f8-c76d-405b-b6c8-4fbfd4a09af4-kube-api-access-cs8fd" (OuterVolumeSpecName: "kube-api-access-cs8fd") pod "821fc9f8-c76d-405b-b6c8-4fbfd4a09af4" (UID: "821fc9f8-c76d-405b-b6c8-4fbfd4a09af4"). InnerVolumeSpecName "kube-api-access-cs8fd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 12:10:47 addons-631322 kubelet[1277]: I0729 12:10:47.362339    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821fc9f8-c76d-405b-b6c8-4fbfd4a09af4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "821fc9f8-c76d-405b-b6c8-4fbfd4a09af4" (UID: "821fc9f8-c76d-405b-b6c8-4fbfd4a09af4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 29 12:10:47 addons-631322 kubelet[1277]: I0729 12:10:47.458638    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cs8fd\" (UniqueName: \"kubernetes.io/projected/821fc9f8-c76d-405b-b6c8-4fbfd4a09af4-kube-api-access-cs8fd\") on node \"addons-631322\" DevicePath \"\""
	Jul 29 12:10:47 addons-631322 kubelet[1277]: I0729 12:10:47.458675    1277 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/821fc9f8-c76d-405b-b6c8-4fbfd4a09af4-webhook-cert\") on node \"addons-631322\" DevicePath \"\""
	Jul 29 12:10:48 addons-631322 kubelet[1277]: I0729 12:10:48.192917    1277 scope.go:117] "RemoveContainer" containerID="ec4068e97ac77e4bbc34b3b37cabee55927455f98a8958e6e90209cbc9f17979"
	Jul 29 12:10:49 addons-631322 kubelet[1277]: I0729 12:10:49.030659    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821fc9f8-c76d-405b-b6c8-4fbfd4a09af4" path="/var/lib/kubelet/pods/821fc9f8-c76d-405b-b6c8-4fbfd4a09af4/volumes"
	
	
	==> storage-provisioner [55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf] <==
	I0729 12:04:41.477530       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 12:04:41.484382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93] <==
	I0729 12:04:43.666278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 12:04:43.737051       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 12:04:43.737113       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 12:04:43.750752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 12:04:43.751078       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-631322_6d727feb-7e6c-4e68-b6d9-3105753fd048!
	I0729 12:04:43.752642       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68d98cb4-4cee-489c-b2b2-baea37fcbb34", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-631322_6d727feb-7e6c-4e68-b6d9-3105753fd048 became leader
	I0729 12:04:43.852129       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-631322_6d727feb-7e6c-4e68-b6d9-3105753fd048!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-631322 -n addons-631322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-631322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.26s)

x
+
TestAddons/parallel/MetricsServer (313.63s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.046334ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-5ckgn" [635ee934-5845-4b41-b592-e16cd7ca050a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008040631s
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (84.226661ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 3m9.060418497s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (71.048522ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 3m13.246017875s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (79.883835ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 3m19.282365745s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (84.693798ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 3m24.364487763s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (75.536823ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 3m37.549960143s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (68.529343ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 3m53.589118691s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (66.128144ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 4m18.777851156s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (63.419639ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 5m7.908054839s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (66.28043ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 6m15.216896717s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (66.699836ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 6m48.591888892s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (63.070292ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 7m24.265354698s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-631322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-631322 top pods -n kube-system: exit status 1 (64.956755ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kr89x, age: 8m13.851496278s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-631322 -n addons-631322
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-631322 logs -n 25: (1.357732898s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-679044                                                                     | download-only-679044 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-468907 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC |                     |
	|         | binary-mirror-468907                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44345                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-468907                                                                     | binary-mirror-468907 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC |                     |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC |                     |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-631322 --wait=true                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:08 UTC |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| ip      | addons-631322 ip                                                                            | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-631322 ssh cat                                                                       | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | /opt/local-path-provisioner/pvc-3da48a95-fd4c-467b-9806-616d63c75cdf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | -p addons-631322                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-631322 ssh curl -s                                                                   | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-631322 addons                                                                        | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-631322 addons                                                                        | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | addons-631322                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | -p addons-631322                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:09 UTC | 29 Jul 24 12:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-631322 ip                                                                            | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:10 UTC | 29 Jul 24 12:10 UTC |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:10 UTC | 29 Jul 24 12:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-631322 addons disable                                                                | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:10 UTC | 29 Jul 24 12:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-631322 addons                                                                        | addons-631322        | jenkins | v1.33.1 | 29 Jul 24 12:12 UTC | 29 Jul 24 12:12 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
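Reassembled from the wrapped Audit rows above, the `start` invocation under test (run here via the workspace binary) was approximately the following; flag order is not preserved by the Audit table:

    out/minikube-linux-amd64 start -p addons-631322 --wait=true --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
      --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=ingress --addons=ingress-dns \
      --addons=helm-tiller --driver=kvm2 --container-runtime=crio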
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:03:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:03:38.785845  241491 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:03:38.786105  241491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:03:38.786114  241491 out.go:304] Setting ErrFile to fd 2...
	I0729 12:03:38.786118  241491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:03:38.786319  241491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:03:38.786923  241491 out.go:298] Setting JSON to false
	I0729 12:03:38.787733  241491 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6362,"bootTime":1722248257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:03:38.787789  241491 start.go:139] virtualization: kvm guest
	I0729 12:03:38.789862  241491 out.go:177] * [addons-631322] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:03:38.791110  241491 notify.go:220] Checking for updates...
	I0729 12:03:38.791119  241491 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:03:38.792482  241491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:03:38.793892  241491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:03:38.795349  241491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:03:38.796544  241491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:03:38.797869  241491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:03:38.799290  241491 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:03:38.830714  241491 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 12:03:38.832014  241491 start.go:297] selected driver: kvm2
	I0729 12:03:38.832025  241491 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:03:38.832039  241491 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:03:38.832680  241491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:03:38.832773  241491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:03:38.847108  241491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:03:38.847152  241491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:03:38.847357  241491 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:03:38.847381  241491 cni.go:84] Creating CNI manager for ""
	I0729 12:03:38.847390  241491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:03:38.847402  241491 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
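As a side note, the bridge CNI is selected automatically here because no CNI was requested for the kvm2/crio combination; pinning the same choice explicitly would look roughly like this (a sketch, assuming the standard --cni flag of current minikube releases):

    out/minikube-linux-amd64 start -p addons-631322 --driver=kvm2 --container-runtime=crio --cni=bridge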
	I0729 12:03:38.847448  241491 start.go:340] cluster config:
	{Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:03:38.847531  241491 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:03:38.849258  241491 out.go:177] * Starting "addons-631322" primary control-plane node in "addons-631322" cluster
	I0729 12:03:38.850606  241491 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:03:38.850635  241491 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:03:38.850663  241491 cache.go:56] Caching tarball of preloaded images
	I0729 12:03:38.850736  241491 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:03:38.850746  241491 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
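The preload being reused here is just a tarball on disk; its presence can be confirmed directly (path taken from the log lines above):

    ls -lh /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4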
	I0729 12:03:38.851030  241491 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/config.json ...
	I0729 12:03:38.851053  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/config.json: {Name:mk47b09464316e77ac954e90709ba511d6f1c023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:03:38.851174  241491 start.go:360] acquireMachinesLock for addons-631322: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:03:38.851215  241491 start.go:364] duration metric: took 29.949µs to acquireMachinesLock for "addons-631322"
	I0729 12:03:38.851231  241491 start.go:93] Provisioning new machine with config: &{Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:03:38.851300  241491 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 12:03:38.852869  241491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 12:03:38.852995  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:03:38.853029  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:03:38.867004  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38575
	I0729 12:03:38.867437  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:03:38.867992  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:03:38.868018  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:03:38.868422  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:03:38.868606  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:03:38.868752  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:03:38.868899  241491 start.go:159] libmachine.API.Create for "addons-631322" (driver="kvm2")
	I0729 12:03:38.868926  241491 client.go:168] LocalClient.Create starting
	I0729 12:03:38.868959  241491 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem
	I0729 12:03:39.066691  241491 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem
	I0729 12:03:39.134077  241491 main.go:141] libmachine: Running pre-create checks...
	I0729 12:03:39.134101  241491 main.go:141] libmachine: (addons-631322) Calling .PreCreateCheck
	I0729 12:03:39.134642  241491 main.go:141] libmachine: (addons-631322) Calling .GetConfigRaw
	I0729 12:03:39.135136  241491 main.go:141] libmachine: Creating machine...
	I0729 12:03:39.135151  241491 main.go:141] libmachine: (addons-631322) Calling .Create
	I0729 12:03:39.135330  241491 main.go:141] libmachine: (addons-631322) Creating KVM machine...
	I0729 12:03:39.136507  241491 main.go:141] libmachine: (addons-631322) DBG | found existing default KVM network
	I0729 12:03:39.137314  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.137181  241513 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0729 12:03:39.137372  241491 main.go:141] libmachine: (addons-631322) DBG | created network xml: 
	I0729 12:03:39.137397  241491 main.go:141] libmachine: (addons-631322) DBG | <network>
	I0729 12:03:39.137410  241491 main.go:141] libmachine: (addons-631322) DBG |   <name>mk-addons-631322</name>
	I0729 12:03:39.137421  241491 main.go:141] libmachine: (addons-631322) DBG |   <dns enable='no'/>
	I0729 12:03:39.137430  241491 main.go:141] libmachine: (addons-631322) DBG |   
	I0729 12:03:39.137438  241491 main.go:141] libmachine: (addons-631322) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 12:03:39.137477  241491 main.go:141] libmachine: (addons-631322) DBG |     <dhcp>
	I0729 12:03:39.137497  241491 main.go:141] libmachine: (addons-631322) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 12:03:39.137504  241491 main.go:141] libmachine: (addons-631322) DBG |     </dhcp>
	I0729 12:03:39.137511  241491 main.go:141] libmachine: (addons-631322) DBG |   </ip>
	I0729 12:03:39.137595  241491 main.go:141] libmachine: (addons-631322) DBG |   
	I0729 12:03:39.137632  241491 main.go:141] libmachine: (addons-631322) DBG | </network>
	I0729 12:03:39.137650  241491 main.go:141] libmachine: (addons-631322) DBG | 
	I0729 12:03:39.142659  241491 main.go:141] libmachine: (addons-631322) DBG | trying to create private KVM network mk-addons-631322 192.168.39.0/24...
	I0729 12:03:39.205402  241491 main.go:141] libmachine: (addons-631322) DBG | private KVM network mk-addons-631322 192.168.39.0/24 created
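The private network defined from the XML printed above can also be inspected or recreated by hand with virsh; this is only an illustration of the equivalent CLI steps, the kvm2 driver itself talks to libvirt through its API:

    virsh --connect qemu:///system net-dumpxml mk-addons-631322    # show the network libvirt actually stored
    virsh --connect qemu:///system net-define mk-addons-631322.xml # or define it from a file holding the XML above
    virsh --connect qemu:///system net-start mk-addons-631322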
	I0729 12:03:39.205453  241491 main.go:141] libmachine: (addons-631322) Setting up store path in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322 ...
	I0729 12:03:39.205488  241491 main.go:141] libmachine: (addons-631322) Building disk image from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:03:39.205505  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.205381  241513 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:03:39.205541  241491 main.go:141] libmachine: (addons-631322) Downloading /home/jenkins/minikube-integration/19341-233093/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 12:03:39.482272  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.482137  241513 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa...
	I0729 12:03:39.587871  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.587680  241513 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/addons-631322.rawdisk...
	I0729 12:03:39.587912  241491 main.go:141] libmachine: (addons-631322) DBG | Writing magic tar header
	I0729 12:03:39.587927  241491 main.go:141] libmachine: (addons-631322) DBG | Writing SSH key tar header
	I0729 12:03:39.587939  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:39.587858  241513 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322 ...
	I0729 12:03:39.588053  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322
	I0729 12:03:39.588078  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines
	I0729 12:03:39.588087  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322 (perms=drwx------)
	I0729 12:03:39.588103  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines (perms=drwxr-xr-x)
	I0729 12:03:39.588114  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:03:39.588125  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube (perms=drwxr-xr-x)
	I0729 12:03:39.588135  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093 (perms=drwxrwxr-x)
	I0729 12:03:39.588144  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 12:03:39.588152  241491 main.go:141] libmachine: (addons-631322) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 12:03:39.588161  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093
	I0729 12:03:39.588166  241491 main.go:141] libmachine: (addons-631322) Creating domain...
	I0729 12:03:39.588181  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 12:03:39.588193  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home/jenkins
	I0729 12:03:39.588228  241491 main.go:141] libmachine: (addons-631322) DBG | Checking permissions on dir: /home
	I0729 12:03:39.588255  241491 main.go:141] libmachine: (addons-631322) DBG | Skipping /home - not owner
	I0729 12:03:39.589216  241491 main.go:141] libmachine: (addons-631322) define libvirt domain using xml: 
	I0729 12:03:39.589237  241491 main.go:141] libmachine: (addons-631322) <domain type='kvm'>
	I0729 12:03:39.589245  241491 main.go:141] libmachine: (addons-631322)   <name>addons-631322</name>
	I0729 12:03:39.589252  241491 main.go:141] libmachine: (addons-631322)   <memory unit='MiB'>4000</memory>
	I0729 12:03:39.589261  241491 main.go:141] libmachine: (addons-631322)   <vcpu>2</vcpu>
	I0729 12:03:39.589268  241491 main.go:141] libmachine: (addons-631322)   <features>
	I0729 12:03:39.589280  241491 main.go:141] libmachine: (addons-631322)     <acpi/>
	I0729 12:03:39.589287  241491 main.go:141] libmachine: (addons-631322)     <apic/>
	I0729 12:03:39.589295  241491 main.go:141] libmachine: (addons-631322)     <pae/>
	I0729 12:03:39.589305  241491 main.go:141] libmachine: (addons-631322)     
	I0729 12:03:39.589316  241491 main.go:141] libmachine: (addons-631322)   </features>
	I0729 12:03:39.589322  241491 main.go:141] libmachine: (addons-631322)   <cpu mode='host-passthrough'>
	I0729 12:03:39.589327  241491 main.go:141] libmachine: (addons-631322)   
	I0729 12:03:39.589340  241491 main.go:141] libmachine: (addons-631322)   </cpu>
	I0729 12:03:39.589345  241491 main.go:141] libmachine: (addons-631322)   <os>
	I0729 12:03:39.589350  241491 main.go:141] libmachine: (addons-631322)     <type>hvm</type>
	I0729 12:03:39.589355  241491 main.go:141] libmachine: (addons-631322)     <boot dev='cdrom'/>
	I0729 12:03:39.589359  241491 main.go:141] libmachine: (addons-631322)     <boot dev='hd'/>
	I0729 12:03:39.589365  241491 main.go:141] libmachine: (addons-631322)     <bootmenu enable='no'/>
	I0729 12:03:39.589368  241491 main.go:141] libmachine: (addons-631322)   </os>
	I0729 12:03:39.589373  241491 main.go:141] libmachine: (addons-631322)   <devices>
	I0729 12:03:39.589380  241491 main.go:141] libmachine: (addons-631322)     <disk type='file' device='cdrom'>
	I0729 12:03:39.589398  241491 main.go:141] libmachine: (addons-631322)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/boot2docker.iso'/>
	I0729 12:03:39.589409  241491 main.go:141] libmachine: (addons-631322)       <target dev='hdc' bus='scsi'/>
	I0729 12:03:39.589414  241491 main.go:141] libmachine: (addons-631322)       <readonly/>
	I0729 12:03:39.589421  241491 main.go:141] libmachine: (addons-631322)     </disk>
	I0729 12:03:39.589427  241491 main.go:141] libmachine: (addons-631322)     <disk type='file' device='disk'>
	I0729 12:03:39.589436  241491 main.go:141] libmachine: (addons-631322)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 12:03:39.589490  241491 main.go:141] libmachine: (addons-631322)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/addons-631322.rawdisk'/>
	I0729 12:03:39.589513  241491 main.go:141] libmachine: (addons-631322)       <target dev='hda' bus='virtio'/>
	I0729 12:03:39.589524  241491 main.go:141] libmachine: (addons-631322)     </disk>
	I0729 12:03:39.589535  241491 main.go:141] libmachine: (addons-631322)     <interface type='network'>
	I0729 12:03:39.589548  241491 main.go:141] libmachine: (addons-631322)       <source network='mk-addons-631322'/>
	I0729 12:03:39.589558  241491 main.go:141] libmachine: (addons-631322)       <model type='virtio'/>
	I0729 12:03:39.589571  241491 main.go:141] libmachine: (addons-631322)     </interface>
	I0729 12:03:39.589598  241491 main.go:141] libmachine: (addons-631322)     <interface type='network'>
	I0729 12:03:39.589609  241491 main.go:141] libmachine: (addons-631322)       <source network='default'/>
	I0729 12:03:39.589620  241491 main.go:141] libmachine: (addons-631322)       <model type='virtio'/>
	I0729 12:03:39.589630  241491 main.go:141] libmachine: (addons-631322)     </interface>
	I0729 12:03:39.589640  241491 main.go:141] libmachine: (addons-631322)     <serial type='pty'>
	I0729 12:03:39.589651  241491 main.go:141] libmachine: (addons-631322)       <target port='0'/>
	I0729 12:03:39.589662  241491 main.go:141] libmachine: (addons-631322)     </serial>
	I0729 12:03:39.589674  241491 main.go:141] libmachine: (addons-631322)     <console type='pty'>
	I0729 12:03:39.589686  241491 main.go:141] libmachine: (addons-631322)       <target type='serial' port='0'/>
	I0729 12:03:39.589702  241491 main.go:141] libmachine: (addons-631322)     </console>
	I0729 12:03:39.589715  241491 main.go:141] libmachine: (addons-631322)     <rng model='virtio'>
	I0729 12:03:39.589726  241491 main.go:141] libmachine: (addons-631322)       <backend model='random'>/dev/random</backend>
	I0729 12:03:39.589735  241491 main.go:141] libmachine: (addons-631322)     </rng>
	I0729 12:03:39.589744  241491 main.go:141] libmachine: (addons-631322)     
	I0729 12:03:39.589753  241491 main.go:141] libmachine: (addons-631322)     
	I0729 12:03:39.589770  241491 main.go:141] libmachine: (addons-631322)   </devices>
	I0729 12:03:39.589781  241491 main.go:141] libmachine: (addons-631322) </domain>
	I0729 12:03:39.589801  241491 main.go:141] libmachine: (addons-631322) 
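Likewise, the domain XML printed above corresponds to what `virsh define` would register; a hand-driven equivalent of the define/start/inspect steps (illustrative only, not what the driver literally executes) is:

    virsh --connect qemu:///system define addons-631322.xml   # register the domain from a file with the XML above
    virsh --connect qemu:///system start addons-631322
    virsh --connect qemu:///system dumpxml addons-631322       # inspect the stored definition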
	I0729 12:03:39.595564  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:39:96:56 in network default
	I0729 12:03:39.596138  241491 main.go:141] libmachine: (addons-631322) Ensuring networks are active...
	I0729 12:03:39.596166  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:39.596676  241491 main.go:141] libmachine: (addons-631322) Ensuring network default is active
	I0729 12:03:39.596928  241491 main.go:141] libmachine: (addons-631322) Ensuring network mk-addons-631322 is active
	I0729 12:03:39.597339  241491 main.go:141] libmachine: (addons-631322) Getting domain xml...
	I0729 12:03:39.598062  241491 main.go:141] libmachine: (addons-631322) Creating domain...
	I0729 12:03:40.974631  241491 main.go:141] libmachine: (addons-631322) Waiting to get IP...
	I0729 12:03:40.975504  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:40.975882  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:40.975912  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:40.975871  241513 retry.go:31] will retry after 221.1026ms: waiting for machine to come up
	I0729 12:03:41.198470  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:41.198967  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:41.199001  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:41.198913  241513 retry.go:31] will retry after 390.326394ms: waiting for machine to come up
	I0729 12:03:41.590590  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:41.590998  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:41.591022  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:41.590969  241513 retry.go:31] will retry after 432.958907ms: waiting for machine to come up
	I0729 12:03:42.025602  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:42.026069  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:42.026099  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:42.026014  241513 retry.go:31] will retry after 601.724783ms: waiting for machine to come up
	I0729 12:03:42.629733  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:42.630146  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:42.630176  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:42.630084  241513 retry.go:31] will retry after 614.697445ms: waiting for machine to come up
	I0729 12:03:43.246453  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:43.246884  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:43.246913  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:43.246831  241513 retry.go:31] will retry after 675.840233ms: waiting for machine to come up
	I0729 12:03:43.924252  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:43.924621  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:43.924648  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:43.924583  241513 retry.go:31] will retry after 1.129870242s: waiting for machine to come up
	I0729 12:03:45.055815  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:45.056264  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:45.056290  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:45.056222  241513 retry.go:31] will retry after 1.407914366s: waiting for machine to come up
	I0729 12:03:46.465921  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:46.466270  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:46.466296  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:46.466222  241513 retry.go:31] will retry after 1.85953515s: waiting for machine to come up
	I0729 12:03:48.327095  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:48.327538  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:48.327564  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:48.327484  241513 retry.go:31] will retry after 1.811774102s: waiting for machine to come up
	I0729 12:03:50.140517  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:50.140992  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:50.141027  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:50.140947  241513 retry.go:31] will retry after 2.1623841s: waiting for machine to come up
	I0729 12:03:52.306212  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:52.306569  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:52.306594  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:52.306506  241513 retry.go:31] will retry after 2.203731396s: waiting for machine to come up
	I0729 12:03:54.511322  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:54.511719  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:54.511746  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:54.511708  241513 retry.go:31] will retry after 3.089723759s: waiting for machine to come up
	I0729 12:03:57.606029  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:03:57.606410  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find current IP address of domain addons-631322 in network mk-addons-631322
	I0729 12:03:57.606429  241491 main.go:141] libmachine: (addons-631322) DBG | I0729 12:03:57.606387  241513 retry.go:31] will retry after 5.382838108s: waiting for machine to come up
	I0729 12:04:02.990939  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:02.991324  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has current primary IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:02.991354  241491 main.go:141] libmachine: (addons-631322) Found IP for machine: 192.168.39.55
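The "Waiting to get IP" retries above amount to polling the DHCP leases of the private network until the MAC 52:54:00:47:2e:02 obtains an address; by hand this can be observed with either of the following standard virsh queries (a sketch):

    virsh --connect qemu:///system net-dhcp-leases mk-addons-631322
    virsh --connect qemu:///system domifaddr addons-631322 --source lease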
	I0729 12:04:02.991368  241491 main.go:141] libmachine: (addons-631322) Reserving static IP address...
	I0729 12:04:02.991651  241491 main.go:141] libmachine: (addons-631322) DBG | unable to find host DHCP lease matching {name: "addons-631322", mac: "52:54:00:47:2e:02", ip: "192.168.39.55"} in network mk-addons-631322
	I0729 12:04:03.062182  241491 main.go:141] libmachine: (addons-631322) DBG | Getting to WaitForSSH function...
	I0729 12:04:03.062215  241491 main.go:141] libmachine: (addons-631322) Reserved static IP address: 192.168.39.55
	I0729 12:04:03.062228  241491 main.go:141] libmachine: (addons-631322) Waiting for SSH to be available...
	I0729 12:04:03.064609  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.065140  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.065182  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.065292  241491 main.go:141] libmachine: (addons-631322) DBG | Using SSH client type: external
	I0729 12:04:03.065326  241491 main.go:141] libmachine: (addons-631322) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa (-rw-------)
	I0729 12:04:03.065349  241491 main.go:141] libmachine: (addons-631322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:04:03.065372  241491 main.go:141] libmachine: (addons-631322) DBG | About to run SSH command:
	I0729 12:04:03.065383  241491 main.go:141] libmachine: (addons-631322) DBG | exit 0
	I0729 12:04:03.189041  241491 main.go:141] libmachine: (addons-631322) DBG | SSH cmd err, output: <nil>: 
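The SSH probe logged at 12:04:03.065349 is an external ssh invocation of `exit 0`; spelled out from the flags recorded above, the equivalent command is roughly:

    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa \
        docker@192.168.39.55 'exit 0' && echo "SSH is up"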
	I0729 12:04:03.189375  241491 main.go:141] libmachine: (addons-631322) KVM machine creation complete!
	I0729 12:04:03.189606  241491 main.go:141] libmachine: (addons-631322) Calling .GetConfigRaw
	I0729 12:04:03.190160  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:03.190352  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:03.190497  241491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 12:04:03.190511  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:03.191603  241491 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 12:04:03.191617  241491 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 12:04:03.191625  241491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 12:04:03.191631  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.193453  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.193763  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.193784  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.193949  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.194122  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.194283  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.194414  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.194575  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.194767  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.194777  241491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 12:04:03.295899  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:04:03.295932  241491 main.go:141] libmachine: Detecting the provisioner...
	I0729 12:04:03.295940  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.298340  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.298648  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.298679  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.298826  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.299010  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.299197  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.299334  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.299516  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.299672  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.299682  241491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 12:04:03.405668  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 12:04:03.405789  241491 main.go:141] libmachine: found compatible host: buildroot
	I0729 12:04:03.405804  241491 main.go:141] libmachine: Provisioning with buildroot...
	I0729 12:04:03.405817  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:04:03.406088  241491 buildroot.go:166] provisioning hostname "addons-631322"
	I0729 12:04:03.406113  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:04:03.406328  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.408863  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.409159  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.409202  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.409357  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.409604  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.409772  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.410043  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.410231  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.410405  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.410417  241491 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-631322 && echo "addons-631322" | sudo tee /etc/hostname
	I0729 12:04:03.527902  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-631322
	
	I0729 12:04:03.527949  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.530512  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.530859  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.530887  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.531036  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.531235  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.531399  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.531522  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.531655  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.531846  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.531869  241491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-631322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-631322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-631322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:04:03.646014  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:04:03.646054  241491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:04:03.646074  241491 buildroot.go:174] setting up certificates
	I0729 12:04:03.646086  241491 provision.go:84] configureAuth start
	I0729 12:04:03.646095  241491 main.go:141] libmachine: (addons-631322) Calling .GetMachineName
	I0729 12:04:03.646410  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:03.648823  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.649195  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.649214  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.649407  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.651502  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.651815  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.651844  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.651974  241491 provision.go:143] copyHostCerts
	I0729 12:04:03.652071  241491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:04:03.652196  241491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:04:03.652264  241491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:04:03.652323  241491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.addons-631322 san=[127.0.0.1 192.168.39.55 addons-631322 localhost minikube]
	I0729 12:04:03.824070  241491 provision.go:177] copyRemoteCerts
	I0729 12:04:03.824140  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:04:03.824164  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.826738  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.827131  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.827165  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.827307  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.827502  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.827665  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.827797  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:03.910795  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 12:04:03.933959  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:04:03.962996  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:04:03.985364  241491 provision.go:87] duration metric: took 339.265549ms to configureAuth
	I0729 12:04:03.985391  241491 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:04:03.985605  241491 config.go:182] Loaded profile config "addons-631322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:03.985716  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:03.988212  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.988540  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:03.988575  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:03.988739  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:03.988961  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.989121  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:03.989246  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:03.989377  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:03.989541  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:03.989555  241491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:04:04.252863  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:04:04.252899  241491 main.go:141] libmachine: Checking connection to Docker...
	I0729 12:04:04.252907  241491 main.go:141] libmachine: (addons-631322) Calling .GetURL
	I0729 12:04:04.254059  241491 main.go:141] libmachine: (addons-631322) DBG | Using libvirt version 6000000
	I0729 12:04:04.255919  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.256329  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.256360  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.256470  241491 main.go:141] libmachine: Docker is up and running!
	I0729 12:04:04.256485  241491 main.go:141] libmachine: Reticulating splines...
	I0729 12:04:04.256494  241491 client.go:171] duration metric: took 25.387558324s to LocalClient.Create
	I0729 12:04:04.256514  241491 start.go:167] duration metric: took 25.387616353s to libmachine.API.Create "addons-631322"
	I0729 12:04:04.256523  241491 start.go:293] postStartSetup for "addons-631322" (driver="kvm2")
	I0729 12:04:04.256541  241491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:04:04.256568  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.256851  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:04:04.256881  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.258846  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.259171  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.259203  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.259320  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.259511  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.259664  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.259815  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:04.342865  241491 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:04:04.347630  241491 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:04:04.347654  241491 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:04:04.347728  241491 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:04:04.347756  241491 start.go:296] duration metric: took 91.220597ms for postStartSetup
	I0729 12:04:04.347805  241491 main.go:141] libmachine: (addons-631322) Calling .GetConfigRaw
	I0729 12:04:04.348373  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:04.350735  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.351051  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.351078  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.351303  241491 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/config.json ...
	I0729 12:04:04.351463  241491 start.go:128] duration metric: took 25.500152223s to createHost
	I0729 12:04:04.351484  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.353661  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.353915  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.353938  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.354073  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.354272  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.354441  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.354618  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.354785  241491 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:04.354984  241491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0729 12:04:04.354997  241491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:04:04.457234  241491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254644.434549096
	
	I0729 12:04:04.457261  241491 fix.go:216] guest clock: 1722254644.434549096
	I0729 12:04:04.457274  241491 fix.go:229] Guest: 2024-07-29 12:04:04.434549096 +0000 UTC Remote: 2024-07-29 12:04:04.351473847 +0000 UTC m=+25.598919584 (delta=83.075249ms)
	I0729 12:04:04.457310  241491 fix.go:200] guest clock delta is within tolerance: 83.075249ms
	I0729 12:04:04.457316  241491 start.go:83] releasing machines lock for "addons-631322", held for 25.606092699s
	I0729 12:04:04.457346  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.457622  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:04.459908  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.460232  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.460259  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.460379  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.460905  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.461129  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:04.461230  241491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:04:04.461281  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.461349  241491 ssh_runner.go:195] Run: cat /version.json
	I0729 12:04:04.461376  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:04.463461  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.463688  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.463782  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.463813  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.463969  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.463968  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:04.464001  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:04.464127  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.464188  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:04.464285  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.464348  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:04.464422  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:04.464474  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:04.464594  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:04.541639  241491 ssh_runner.go:195] Run: systemctl --version
	I0729 12:04:04.565723  241491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:04:04.725068  241491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:04:04.730930  241491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:04:04.731003  241491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:04:04.747140  241491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:04:04.747163  241491 start.go:495] detecting cgroup driver to use...
	I0729 12:04:04.747233  241491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:04:04.762268  241491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:04:04.775558  241491 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:04:04.775618  241491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:04:04.788740  241491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:04:04.801864  241491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:04:04.908099  241491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:04:05.070388  241491 docker.go:233] disabling docker service ...
	I0729 12:04:05.070472  241491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:04:05.084857  241491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:04:05.097567  241491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:04:05.218183  241491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:04:05.341114  241491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:04:05.355127  241491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:04:05.372766  241491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:04:05.372844  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.383119  241491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:04:05.383176  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.393788  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.404283  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.414624  241491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:04:05.425117  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.435228  241491 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.451683  241491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:04:05.461864  241491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:04:05.470791  241491 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:04:05.470839  241491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:04:05.483572  241491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:04:05.492603  241491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:04:05.610229  241491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:04:05.738899  241491 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:04:05.738996  241491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:04:05.743470  241491 start.go:563] Will wait 60s for crictl version
	I0729 12:04:05.743519  241491 ssh_runner.go:195] Run: which crictl
	I0729 12:04:05.746979  241491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:04:05.782099  241491 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:04:05.782203  241491 ssh_runner.go:195] Run: crio --version
	I0729 12:04:05.809869  241491 ssh_runner.go:195] Run: crio --version
	I0729 12:04:05.839164  241491 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:04:05.840545  241491 main.go:141] libmachine: (addons-631322) Calling .GetIP
	I0729 12:04:05.843203  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:05.843542  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:05.843571  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:05.843779  241491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:04:05.847712  241491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:04:05.859477  241491 kubeadm.go:883] updating cluster {Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:04:05.859598  241491 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:04:05.859640  241491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:04:05.891120  241491 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 12:04:05.891188  241491 ssh_runner.go:195] Run: which lz4
	I0729 12:04:05.895074  241491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 12:04:05.899082  241491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:04:05.899109  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 12:04:07.244805  241491 crio.go:462] duration metric: took 1.34975623s to copy over tarball
	I0729 12:04:07.244874  241491 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 12:04:09.449245  241491 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204347368s)
	I0729 12:04:09.449270  241491 crio.go:469] duration metric: took 2.204436281s to extract the tarball
	I0729 12:04:09.449277  241491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 12:04:09.487216  241491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:04:09.527467  241491 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:04:09.527492  241491 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:04:09.527501  241491 kubeadm.go:934] updating node { 192.168.39.55 8443 v1.30.3 crio true true} ...
	I0729 12:04:09.527608  241491 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-631322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:04:09.527671  241491 ssh_runner.go:195] Run: crio config
	I0729 12:04:09.572260  241491 cni.go:84] Creating CNI manager for ""
	I0729 12:04:09.572281  241491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:04:09.572290  241491 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:04:09.572313  241491 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-631322 NodeName:addons-631322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:04:09.572445  241491 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-631322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:04:09.572509  241491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:04:09.581991  241491 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:04:09.582047  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:04:09.590703  241491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 12:04:09.607019  241491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:04:09.622406  241491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0729 12:04:09.638046  241491 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I0729 12:04:09.641777  241491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:04:09.652863  241491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:04:09.770311  241491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:04:09.787820  241491 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322 for IP: 192.168.39.55
	I0729 12:04:09.787842  241491 certs.go:194] generating shared ca certs ...
	I0729 12:04:09.787859  241491 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.787988  241491 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:04:09.853058  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt ...
	I0729 12:04:09.853088  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt: {Name:mke27a0eb0127502de013bd52c09e0c1c581ed26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.853247  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key ...
	I0729 12:04:09.853257  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key: {Name:mk3457a6f2487a1a6f1af779557867a2e01c1eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.853328  241491 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:04:09.940681  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt ...
	I0729 12:04:09.940709  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt: {Name:mkbc859dc9196fd104e55851409846d48b5b049b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.940884  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key ...
	I0729 12:04:09.940895  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key: {Name:mk7a5c8af9586bdc26928dc16bf94e44d413be49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:09.940963  241491 certs.go:256] generating profile certs ...
	I0729 12:04:09.941016  241491 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.key
	I0729 12:04:09.941029  241491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt with IP's: []
	I0729 12:04:10.056586  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt ...
	I0729 12:04:10.056616  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: {Name:mk70b0764140f92cd0a8ee2100ee1cfaeceaab30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.056782  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.key ...
	I0729 12:04:10.056805  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.key: {Name:mk03a9aeff9bb7e7dbf216d4adf4ceb122674215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.056878  241491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505
	I0729 12:04:10.056896  241491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.55]
	I0729 12:04:10.215699  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505 ...
	I0729 12:04:10.215732  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505: {Name:mk83c38842f5bab27670a51e22fe8f97c2e52472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.215922  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505 ...
	I0729 12:04:10.215938  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505: {Name:mk787c1efcd4cdbe1f1e99afc46e8fdfdb1326dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.216028  241491 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt.7242e505 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt
	I0729 12:04:10.216111  241491 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key.7242e505 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key
	I0729 12:04:10.216155  241491 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key
	I0729 12:04:10.216172  241491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt with IP's: []
	I0729 12:04:10.478758  241491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt ...
	I0729 12:04:10.478794  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt: {Name:mk04af66d23a52e124de575b89f10821e6f919ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.478952  241491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key ...
	I0729 12:04:10.478967  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key: {Name:mk5c9ea11c36bde78f317789122b8285064035f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:10.479128  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:04:10.479162  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:04:10.479187  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:04:10.479209  241491 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:04:10.479830  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:04:10.504907  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:04:10.536699  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:04:10.568519  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:04:10.591439  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 12:04:10.613685  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:04:10.636137  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:04:10.658469  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:04:10.685140  241491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:04:10.708879  241491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:04:10.725295  241491 ssh_runner.go:195] Run: openssl version
	I0729 12:04:10.730896  241491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:04:10.741235  241491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:04:10.745337  241491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:04:10.745388  241491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:04:10.750981  241491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:04:10.761199  241491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:04:10.765402  241491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 12:04:10.765457  241491 kubeadm.go:392] StartCluster: {Name:addons-631322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-631322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:04:10.765580  241491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:04:10.765634  241491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:04:10.799344  241491 cri.go:89] found id: ""
	I0729 12:04:10.799413  241491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 12:04:10.809520  241491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 12:04:10.818994  241491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:04:10.828297  241491 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:04:10.828319  241491 kubeadm.go:157] found existing configuration files:
	
	I0729 12:04:10.828357  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:04:10.836896  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:04:10.836951  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:04:10.846501  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:04:10.855322  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:04:10.855366  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:04:10.864439  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:04:10.872910  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:04:10.872958  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:04:10.882003  241491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:04:10.890964  241491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:04:10.891025  241491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 12:04:10.900184  241491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 12:04:10.957294  241491 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 12:04:10.957383  241491 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 12:04:11.101216  241491 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 12:04:11.101359  241491 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 12:04:11.101501  241491 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 12:04:11.299180  241491 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 12:04:11.381248  241491 out.go:204]   - Generating certificates and keys ...
	I0729 12:04:11.381355  241491 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 12:04:11.381449  241491 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 12:04:11.615376  241491 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 12:04:11.804184  241491 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 12:04:12.031773  241491 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 12:04:12.363667  241491 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 12:04:12.475072  241491 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 12:04:12.475348  241491 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-631322 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0729 12:04:12.551828  241491 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 12:04:12.552079  241491 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-631322 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0729 12:04:12.598982  241491 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 12:04:13.069678  241491 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 12:04:13.305223  241491 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 12:04:13.305435  241491 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 12:04:13.506561  241491 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 12:04:13.719055  241491 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 12:04:14.130043  241491 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 12:04:14.218646  241491 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 12:04:14.662079  241491 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 12:04:14.662601  241491 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 12:04:14.664855  241491 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 12:04:14.666904  241491 out.go:204]   - Booting up control plane ...
	I0729 12:04:14.667030  241491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 12:04:14.667156  241491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 12:04:14.667260  241491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 12:04:14.682528  241491 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 12:04:14.685057  241491 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 12:04:14.685112  241491 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 12:04:14.813844  241491 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 12:04:14.813963  241491 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 12:04:15.315097  241491 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.872753ms
	I0729 12:04:15.315264  241491 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 12:04:20.318390  241491 kubeadm.go:310] [api-check] The API server is healthy after 5.00371171s
	I0729 12:04:20.331615  241491 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 12:04:20.342822  241491 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 12:04:20.367194  241491 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 12:04:20.367387  241491 kubeadm.go:310] [mark-control-plane] Marking the node addons-631322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 12:04:20.378333  241491 kubeadm.go:310] [bootstrap-token] Using token: x2rsx0.x2zgacijylh4bb28
	I0729 12:04:20.379755  241491 out.go:204]   - Configuring RBAC rules ...
	I0729 12:04:20.379893  241491 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 12:04:20.387631  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 12:04:20.393747  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 12:04:20.396968  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 12:04:20.400076  241491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 12:04:20.403313  241491 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 12:04:20.726776  241491 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 12:04:21.169414  241491 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 12:04:21.724226  241491 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 12:04:21.724256  241491 kubeadm.go:310] 
	I0729 12:04:21.724322  241491 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 12:04:21.724345  241491 kubeadm.go:310] 
	I0729 12:04:21.724426  241491 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 12:04:21.724444  241491 kubeadm.go:310] 
	I0729 12:04:21.724492  241491 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 12:04:21.724577  241491 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 12:04:21.724660  241491 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 12:04:21.724668  241491 kubeadm.go:310] 
	I0729 12:04:21.724740  241491 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 12:04:21.724757  241491 kubeadm.go:310] 
	I0729 12:04:21.724863  241491 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 12:04:21.724872  241491 kubeadm.go:310] 
	I0729 12:04:21.724914  241491 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 12:04:21.725017  241491 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 12:04:21.725107  241491 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 12:04:21.725127  241491 kubeadm.go:310] 
	I0729 12:04:21.725252  241491 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 12:04:21.725362  241491 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 12:04:21.725373  241491 kubeadm.go:310] 
	I0729 12:04:21.725478  241491 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x2rsx0.x2zgacijylh4bb28 \
	I0729 12:04:21.725643  241491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 12:04:21.725675  241491 kubeadm.go:310] 	--control-plane 
	I0729 12:04:21.725684  241491 kubeadm.go:310] 
	I0729 12:04:21.725799  241491 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 12:04:21.725809  241491 kubeadm.go:310] 
	I0729 12:04:21.725932  241491 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x2rsx0.x2zgacijylh4bb28 \
	I0729 12:04:21.726071  241491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 12:04:21.726213  241491 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 12:04:21.726253  241491 cni.go:84] Creating CNI manager for ""
	I0729 12:04:21.726269  241491 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:04:21.728007  241491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 12:04:21.729263  241491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 12:04:21.739807  241491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 12:04:21.757304  241491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 12:04:21.757392  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:21.757403  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-631322 minikube.k8s.io/updated_at=2024_07_29T12_04_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=addons-631322 minikube.k8s.io/primary=true
	I0729 12:04:21.788741  241491 ops.go:34] apiserver oom_adj: -16
	I0729 12:04:21.887541  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:22.387817  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:22.887639  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:23.388478  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:23.887659  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:24.388598  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:24.887881  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:25.387920  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:25.887994  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:26.388506  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:26.888134  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:27.387644  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:27.887692  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:28.388241  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:28.888114  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:29.387934  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:29.887701  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:30.387663  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:30.888542  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:31.388585  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:31.887801  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:32.387988  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:32.888588  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:33.388072  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:33.887865  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:34.387562  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:34.888132  241491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 12:04:34.967324  241491 kubeadm.go:1113] duration metric: took 13.209992637s to wait for elevateKubeSystemPrivileges
	I0729 12:04:34.967364  241491 kubeadm.go:394] duration metric: took 24.201913797s to StartCluster
	I0729 12:04:34.967389  241491 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:34.967521  241491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:04:34.967960  241491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:04:34.968182  241491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 12:04:34.968217  241491 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:04:34.968284  241491 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 12:04:34.968421  241491 addons.go:69] Setting yakd=true in profile "addons-631322"
	I0729 12:04:34.968425  241491 addons.go:69] Setting helm-tiller=true in profile "addons-631322"
	I0729 12:04:34.968445  241491 addons.go:69] Setting inspektor-gadget=true in profile "addons-631322"
	I0729 12:04:34.968462  241491 addons.go:234] Setting addon yakd=true in "addons-631322"
	I0729 12:04:34.968466  241491 addons.go:234] Setting addon inspektor-gadget=true in "addons-631322"
	I0729 12:04:34.968469  241491 addons.go:234] Setting addon helm-tiller=true in "addons-631322"
	I0729 12:04:34.968460  241491 addons.go:69] Setting ingress-dns=true in profile "addons-631322"
	I0729 12:04:34.968479  241491 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-631322"
	I0729 12:04:34.968498  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968505  241491 addons.go:69] Setting registry=true in profile "addons-631322"
	I0729 12:04:34.968508  241491 config.go:182] Loaded profile config "addons-631322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:34.968521  241491 addons.go:234] Setting addon registry=true in "addons-631322"
	I0729 12:04:34.968521  241491 addons.go:69] Setting metrics-server=true in profile "addons-631322"
	I0729 12:04:34.968524  241491 addons.go:69] Setting default-storageclass=true in profile "addons-631322"
	I0729 12:04:34.968538  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968542  241491 addons.go:234] Setting addon metrics-server=true in "addons-631322"
	I0729 12:04:34.968548  241491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-631322"
	I0729 12:04:34.968510  241491 addons.go:69] Setting ingress=true in profile "addons-631322"
	I0729 12:04:34.968584  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968598  241491 addons.go:234] Setting addon ingress=true in "addons-631322"
	I0729 12:04:34.968610  241491 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-631322"
	I0729 12:04:34.968633  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968642  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968498  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.968433  241491 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-631322"
	I0729 12:04:34.969031  241491 addons.go:69] Setting gcp-auth=true in profile "addons-631322"
	I0729 12:04:34.969042  241491 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-631322"
	I0729 12:04:34.969045  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969054  241491 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-631322"
	I0729 12:04:34.969055  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969063  241491 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-631322"
	I0729 12:04:34.969067  241491 addons.go:69] Setting storage-provisioner=true in profile "addons-631322"
	I0729 12:04:34.969020  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969077  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969084  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969087  241491 addons.go:234] Setting addon storage-provisioner=true in "addons-631322"
	I0729 12:04:34.969100  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969109  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969175  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969210  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969228  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969252  241491 addons.go:69] Setting volcano=true in profile "addons-631322"
	I0729 12:04:34.969263  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969275  241491 addons.go:234] Setting addon volcano=true in "addons-631322"
	I0729 12:04:34.968516  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969291  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969315  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969027  241491 addons.go:69] Setting cloud-spanner=true in profile "addons-631322"
	I0729 12:04:34.969376  241491 addons.go:69] Setting volumesnapshots=true in profile "addons-631322"
	I0729 12:04:34.969056  241491 mustload.go:65] Loading cluster: addons-631322
	I0729 12:04:34.969387  241491 addons.go:234] Setting addon cloud-spanner=true in "addons-631322"
	I0729 12:04:34.969399  241491 addons.go:234] Setting addon volumesnapshots=true in "addons-631322"
	I0729 12:04:34.969280  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.968502  241491 addons.go:234] Setting addon ingress-dns=true in "addons-631322"
	I0729 12:04:34.969584  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969600  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969610  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969642  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969648  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969699  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969721  241491 config.go:182] Loaded profile config "addons-631322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:34.969773  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.969887  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969905  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969957  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969964  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.969975  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.969986  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970020  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970029  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970062  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970071  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970080  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970088  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:34.970091  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970125  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:34.970154  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:34.970598  241491 out.go:177] * Verifying Kubernetes components...
	I0729 12:04:34.972138  241491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:04:34.988249  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0729 12:04:34.988935  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I0729 12:04:34.989135  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0729 12:04:35.000524  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.000582  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.000851  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.001274  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.001491  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.001516  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.001586  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.002103  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.002127  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.002263  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.002282  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.002336  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.002685  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.003142  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.003186  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.006604  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.006971  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.007004  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.007197  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.007249  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.015628  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41805
	I0729 12:04:35.016242  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.016865  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.016884  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.017290  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.017916  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.017957  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.019068  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I0729 12:04:35.019605  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.020165  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.020182  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.020577  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.020848  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.024722  241491 addons.go:234] Setting addon default-storageclass=true in "addons-631322"
	I0729 12:04:35.024765  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:35.025335  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.025375  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.025991  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0729 12:04:35.026447  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.026947  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.026965  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.027295  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.027875  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.027910  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.032272  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0729 12:04:35.032865  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.033321  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.033337  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.033754  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.034329  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.034369  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.035032  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0729 12:04:35.035479  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.035992  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.036011  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.036350  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.036883  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.036922  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.043035  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0729 12:04:35.043284  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0729 12:04:35.043847  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.043963  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0729 12:04:35.044600  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.044621  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.045050  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.045261  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.045717  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.045750  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.046407  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.046426  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.047096  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.047707  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.047727  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.047960  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0729 12:04:35.048149  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0729 12:04:35.048295  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.048868  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.048903  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.048979  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0729 12:04:35.049109  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.049149  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.049404  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.049545  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.049558  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.049629  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.049694  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.050010  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.050582  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.050621  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.050902  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.050916  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.051046  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.051056  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.051416  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.051482  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.051972  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.051997  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.052235  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.052888  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.052932  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.054049  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 12:04:35.054395  241491 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 12:04:35.054505  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.054586  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0729 12:04:35.054958  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.055188  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.055206  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.055517  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.055543  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.055609  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.055697  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 12:04:35.055720  241491 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 12:04:35.055740  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.055958  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.056129  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.056175  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.056207  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.058878  241491 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-631322"
	I0729 12:04:35.058924  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:35.059276  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.059305  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.060368  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.060814  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.060838  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.061127  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.061317  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.061516  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.061637  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.062853  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0729 12:04:35.063335  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.063858  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.063876  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.064260  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.064467  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.066875  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
	I0729 12:04:35.067393  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.067943  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.067962  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.068354  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.068611  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.069797  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:35.070183  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.070218  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.071631  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.073674  241491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:04:35.075044  241491 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:04:35.075065  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 12:04:35.075084  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.078248  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.078617  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.078638  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.078871  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.079111  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.079321  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.079476  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.080508  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41477
	I0729 12:04:35.080685  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0729 12:04:35.081213  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.081816  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.081835  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.082242  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.082499  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.083338  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.083995  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.084014  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.084444  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.084715  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.085721  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.086857  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.087894  241491 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 12:04:35.088730  241491 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 12:04:35.090420  241491 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 12:04:35.090442  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 12:04:35.090462  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.090523  241491 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 12:04:35.091781  241491 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 12:04:35.091806  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 12:04:35.091827  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.094111  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I0729 12:04:35.094965  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.095566  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.095700  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.095724  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.096254  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.096315  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.096330  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.096498  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.096758  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.096986  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.097177  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.097482  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.097761  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.097807  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42175
	I0729 12:04:35.097832  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.097847  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.097963  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.100242  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0729 12:04:35.100287  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 12:04:35.100373  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.100417  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.100374  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0729 12:04:35.100478  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0729 12:04:35.101174  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.101254  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.101253  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.101271  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.101330  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.101508  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.101789  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.102191  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.102255  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0729 12:04:35.102342  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.102356  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.102451  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.102626  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.102640  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.102715  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.102750  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I0729 12:04:35.102780  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.103212  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.103216  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.103225  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.103268  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.103270  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.103295  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.103308  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.103469  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.103665  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.103716  241491 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 12:04:35.103795  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:35.103823  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:35.103861  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.104000  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.104011  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.104359  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.104544  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.104716  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0729 12:04:35.104952  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.104966  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.105066  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.105093  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.105387  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.105509  241491 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 12:04:35.105527  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 12:04:35.105544  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.105636  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.105885  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.106290  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.106531  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.106597  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.106925  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.106944  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.107000  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.107373  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.107617  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.108556  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.108662  241491 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 12:04:35.108770  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 12:04:35.108813  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.110605  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.111162  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.111196  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.111373  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.111566  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.111687  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 12:04:35.111744  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.111814  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.111839  241491 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 12:04:35.111861  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 12:04:35.112456  241491 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 12:04:35.112475  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.112094  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.113515  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 12:04:35.113539  241491 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 12:04:35.113872  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 12:04:35.113891  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.114197  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 12:04:35.114998  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 12:04:35.115318  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0729 12:04:35.115746  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 12:04:35.115764  241491 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 12:04:35.115782  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.116505  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.116563  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 12:04:35.116980  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.117008  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.117271  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.117756  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 12:04:35.117819  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.117837  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.118342  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.118903  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.118934  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.118937  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 12:04:35.118968  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0729 12:04:35.119251  241491 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 12:04:35.119274  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 12:04:35.119292  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.119324  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.119250  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.119259  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.119519  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.119658  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.119939  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.119962  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.119951  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.120056  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.120304  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.120351  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.120559  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.120564  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.120826  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.120852  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.121157  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.121490  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.121568  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 12:04:35.122223  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.122823  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.122848  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.122867  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.122987  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.122998  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.123069  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.123343  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.123636  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.123837  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.123876  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.123947  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 12:04:35.124143  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.124699  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.124998  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:35.125015  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:35.125274  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:35.125291  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:35.125306  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:35.125318  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:35.125318  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33113
	I0729 12:04:35.125747  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:35.125763  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:35.125779  241491 main.go:141] libmachine: () Calling .GetVersion
	W0729 12:04:35.125828  241491 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 12:04:35.126129  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.126536  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 12:04:35.127323  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.127341  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.127651  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.127812  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.127867  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0729 12:04:35.128067  241491 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 12:04:35.128777  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.129225  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.129244  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.129368  241491 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 12:04:35.129444  241491 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 12:04:35.129461  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 12:04:35.129481  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.129946  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.130700  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.130700  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.130973  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 12:04:35.130987  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 12:04:35.131003  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.132392  241491 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 12:04:35.133197  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.133384  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0729 12:04:35.133610  241491 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 12:04:35.133631  241491 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 12:04:35.133647  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.133743  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:35.134159  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:35.134180  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:35.134548  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:35.134713  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:35.134791  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.135026  241491 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 12:04:35.135150  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.135169  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.135181  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.135400  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.135592  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.135728  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.135915  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.136191  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.136207  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.136479  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.136652  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.136843  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.137011  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:35.137032  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.137231  241491 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 12:04:35.137248  241491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 12:04:35.137265  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.137389  241491 out.go:177]   - Using image docker.io/busybox:stable
	I0729 12:04:35.137964  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.138291  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.138307  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.138477  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.138620  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.138732  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.138860  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.139205  241491 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 12:04:35.139221  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 12:04:35.139236  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:35.140355  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.140898  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.140919  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.141109  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.141284  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.141477  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.141660  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:35.142474  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.142901  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:35.142913  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:35.143096  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:35.143247  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:35.143362  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:35.143498  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	W0729 12:04:35.165779  241491 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36328->192.168.39.55:22: read: connection reset by peer
	I0729 12:04:35.165820  241491 retry.go:31] will retry after 350.545163ms: ssh: handshake failed: read tcp 192.168.39.1:36328->192.168.39.55:22: read: connection reset by peer
	I0729 12:04:35.433680  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 12:04:35.433708  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 12:04:35.447859  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:04:35.457905  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 12:04:35.486330  241491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:04:35.486395  241491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 12:04:35.515313  241491 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 12:04:35.515343  241491 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 12:04:35.519239  241491 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 12:04:35.519266  241491 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 12:04:35.569398  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 12:04:35.612308  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 12:04:35.624135  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 12:04:35.624160  241491 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 12:04:35.627666  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 12:04:35.634551  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 12:04:35.634571  241491 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 12:04:35.645654  241491 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 12:04:35.645672  241491 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 12:04:35.657007  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 12:04:35.657027  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 12:04:35.661735  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 12:04:35.710852  241491 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 12:04:35.710875  241491 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 12:04:35.735471  241491 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 12:04:35.735493  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 12:04:35.749072  241491 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 12:04:35.749099  241491 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 12:04:35.824370  241491 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 12:04:35.824396  241491 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 12:04:35.888976  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 12:04:35.889004  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 12:04:35.915194  241491 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 12:04:35.915223  241491 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 12:04:35.922827  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 12:04:35.924583  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 12:04:35.924610  241491 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 12:04:36.040432  241491 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 12:04:36.040462  241491 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 12:04:36.043091  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 12:04:36.057180  241491 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 12:04:36.057202  241491 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 12:04:36.067221  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 12:04:36.107030  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 12:04:36.107084  241491 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 12:04:36.123660  241491 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 12:04:36.123692  241491 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 12:04:36.140680  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 12:04:36.140713  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 12:04:36.193178  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 12:04:36.240756  241491 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 12:04:36.240781  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 12:04:36.248443  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 12:04:36.248475  241491 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 12:04:36.288508  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 12:04:36.288534  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 12:04:36.329772  241491 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 12:04:36.329805  241491 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 12:04:36.546200  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 12:04:36.563455  241491 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 12:04:36.563478  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 12:04:36.657833  241491 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 12:04:36.657864  241491 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 12:04:36.669090  241491 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 12:04:36.669115  241491 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 12:04:36.815978  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 12:04:36.880575  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 12:04:36.880599  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 12:04:36.955276  241491 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 12:04:36.955300  241491 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 12:04:37.117764  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 12:04:37.117797  241491 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 12:04:37.177645  241491 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 12:04:37.177681  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 12:04:37.372778  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 12:04:37.372826  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 12:04:37.449701  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 12:04:37.501248  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 12:04:37.501272  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 12:04:37.735423  241491 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 12:04:37.735456  241491 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 12:04:38.045253  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 12:04:39.547306  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.099410205s)
	I0729 12:04:39.547359  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.547371  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.547367  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.089426656s)
	I0729 12:04:39.547415  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.547425  241491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.061004467s)
	I0729 12:04:39.547454  241491 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.061097534s)
	I0729 12:04:39.547453  241491 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 12:04:39.547527  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.978100223s)
	I0729 12:04:39.547556  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.547575  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.547432  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.548477  241491 node_ready.go:35] waiting up to 6m0s for node "addons-631322" to be "Ready" ...
	I0729 12:04:39.549806  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.549811  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.549819  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.549813  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.549831  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.549834  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.549832  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.549840  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.549844  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.549840  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.549865  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.549850  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.549851  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.549875  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:39.549952  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:39.550170  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.550187  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.550174  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.550231  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.550231  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:39.550207  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.550257  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:39.550261  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.550272  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:39.557120  241491 node_ready.go:49] node "addons-631322" has status "Ready":"True"
	I0729 12:04:39.557141  241491 node_ready.go:38] duration metric: took 8.642126ms for node "addons-631322" to be "Ready" ...
	I0729 12:04:39.557151  241491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:04:39.573461  241491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:40.063744  241491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-631322" context rescaled to 1 replicas
	I0729 12:04:41.587429  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:42.148385  241491 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 12:04:42.148440  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:42.151679  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.152138  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:42.152169  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.152334  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:42.152562  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:42.152764  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:42.152937  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:42.584682  241491 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 12:04:42.702635  241491 pod_ready.go:92] pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:42.702661  241491 pod_ready.go:81] duration metric: took 3.129176071s for pod "coredns-7db6d8ff4d-kr89x" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:42.702671  241491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:42.794235  241491 addons.go:234] Setting addon gcp-auth=true in "addons-631322"
	I0729 12:04:42.794295  241491 host.go:66] Checking if "addons-631322" exists ...
	I0729 12:04:42.794608  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:42.794636  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:42.811033  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0729 12:04:42.811492  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:42.811942  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:42.811966  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:42.812350  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:42.812893  241491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:42.812925  241491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:42.828956  241491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0729 12:04:42.829406  241491 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:42.829877  241491 main.go:141] libmachine: Using API Version  1
	I0729 12:04:42.829902  241491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:42.830275  241491 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:42.830507  241491 main.go:141] libmachine: (addons-631322) Calling .GetState
	I0729 12:04:42.832046  241491 main.go:141] libmachine: (addons-631322) Calling .DriverName
	I0729 12:04:42.832326  241491 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 12:04:42.832352  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHHostname
	I0729 12:04:42.835302  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.835724  241491 main.go:141] libmachine: (addons-631322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:2e:02", ip: ""} in network mk-addons-631322: {Iface:virbr1 ExpiryTime:2024-07-29 13:03:53 +0000 UTC Type:0 Mac:52:54:00:47:2e:02 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-631322 Clientid:01:52:54:00:47:2e:02}
	I0729 12:04:42.835751  241491 main.go:141] libmachine: (addons-631322) DBG | domain addons-631322 has defined IP address 192.168.39.55 and MAC address 52:54:00:47:2e:02 in network mk-addons-631322
	I0729 12:04:42.835899  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHPort
	I0729 12:04:42.836095  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHKeyPath
	I0729 12:04:42.836253  241491 main.go:141] libmachine: (addons-631322) Calling .GetSSHUsername
	I0729 12:04:42.836377  241491 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/addons-631322/id_rsa Username:docker}
	I0729 12:04:43.676676  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.064327203s)
	I0729 12:04:43.676735  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.676747  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.676808  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.04909102s)
	I0729 12:04:43.676867  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.015110392s)
	I0729 12:04:43.676895  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.676908  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.676866  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.676950  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.676945  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.754079078s)
	I0729 12:04:43.676984  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677002  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677032  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677043  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.677052  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677060  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677124  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677132  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.677141  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677147  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677155  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.634034481s)
	I0729 12:04:43.677187  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677200  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677274  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.610030183s)
	I0729 12:04:43.677292  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677299  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677365  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.484161305s)
	I0729 12:04:43.677384  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677395  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677463  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.131235328s)
	I0729 12:04:43.677480  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677490  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677618  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.861600082s)
	W0729 12:04:43.677647  241491 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 12:04:43.677674  241491 retry.go:31] will retry after 373.115146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 12:04:43.677796  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.228035637s)
	I0729 12:04:43.677823  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.677834  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.677942  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.677960  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.677963  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677990  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.677993  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.677998  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678004  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.678010  241491 addons.go:475] Verifying addon ingress=true in "addons-631322"
	I0729 12:04:43.678061  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.678139  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.678148  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678175  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.678182  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678191  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.678197  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.678248  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.678266  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.678271  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.678279  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.678285  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.678012  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.678113  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.678094  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.679534  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.679545  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.679552  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.680817  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.680823  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.680830  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.680839  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.680846  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.680853  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.680875  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.680895  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.680900  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.680909  241491 addons.go:475] Verifying addon metrics-server=true in "addons-631322"
	I0729 12:04:43.681062  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681088  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681094  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.681102  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.681109  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.681163  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681168  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681193  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681198  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681203  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.681206  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.681211  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.681217  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.680847  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.681657  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.681686  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.681693  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.682146  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.682163  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.682173  241491 addons.go:475] Verifying addon registry=true in "addons-631322"
	I0729 12:04:43.682708  241491 out.go:177] * Verifying ingress addon...
	I0729 12:04:43.682734  241491 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-631322 service yakd-dashboard -n yakd-dashboard
	
	I0729 12:04:43.683694  241491 out.go:177] * Verifying registry addon...
	I0729 12:04:43.682847  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.684286  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.682867  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.685696  241491 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 12:04:43.685757  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.685788  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.685801  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:43.686395  241491 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 12:04:43.699861  241491 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 12:04:43.699883  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:43.700250  241491 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 12:04:43.700267  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:43.710848  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.710871  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.711200  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.711224  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 12:04:43.711346  241491 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0729 12:04:43.729266  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:43.729298  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:43.729636  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:43.729641  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:43.729659  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:44.051662  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 12:04:44.190251  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:44.191649  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:44.696963  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:44.696984  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:44.726322  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:45.208892  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:45.209042  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:45.729574  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:45.730680  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:45.819039  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.773719865s)
	I0729 12:04:45.819045  241491 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.986693401s)
	I0729 12:04:45.819107  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:45.819164  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:45.819491  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:45.819557  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:45.819575  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:45.819587  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:45.819525  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:45.819933  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:45.819950  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:45.819982  241491 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-631322"
	I0729 12:04:45.820708  241491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 12:04:45.821543  241491 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 12:04:45.822813  241491 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 12:04:45.823750  241491 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 12:04:45.823872  241491 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 12:04:45.823892  241491 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 12:04:45.845539  241491 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 12:04:45.845563  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:45.976753  241491 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 12:04:45.976785  241491 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 12:04:46.032473  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.98075893s)
	I0729 12:04:46.032531  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:46.032541  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:46.032948  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:46.032976  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:46.032985  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:46.032994  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:46.033002  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:46.033375  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:46.033397  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:46.033413  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:46.068897  241491 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 12:04:46.068924  241491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 12:04:46.123660  241491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
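
The apply runs above follow the same pattern: the addon manifests are copied under /etc/kubernetes/addons/ on the guest, then applied with the kubectl binary matching the cluster version (v1.30.3 here) against the in-guest kubeconfig. A small Go sketch of that pattern follows; it is a hypothetical helper rather than minikube's actual code, and in the real run the command is executed over SSH with sudo.

    // Sketch: run "kubectl apply -f ..." for a set of addon manifests using the
    // bundled kubectl and an explicit KUBECONFIG, mirroring the command logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m) // kubectl accepts repeated -f flags
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/gcp-auth-ns.yaml",
            "/etc/kubernetes/addons/gcp-auth-service.yaml",
            "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
        }
        err := applyAddonManifests("/var/lib/minikube/binaries/v1.30.3/kubectl",
            "/var/lib/minikube/kubeconfig", manifests)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
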
	I0729 12:04:46.192329  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:46.192904  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:46.331911  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:46.692963  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:46.693566  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:46.829766  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:47.214496  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:47.214677  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:47.251947  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:47.306070  241491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.182359347s)
	I0729 12:04:47.306128  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:47.306139  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:47.306466  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:47.306522  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:47.306537  241491 main.go:141] libmachine: Making call to close driver server
	I0729 12:04:47.306545  241491 main.go:141] libmachine: (addons-631322) Calling .Close
	I0729 12:04:47.306886  241491 main.go:141] libmachine: (addons-631322) DBG | Closing plugin on server side
	I0729 12:04:47.306901  241491 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:04:47.306915  241491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:04:47.308583  241491 addons.go:475] Verifying addon gcp-auth=true in "addons-631322"
	I0729 12:04:47.310221  241491 out.go:177] * Verifying gcp-auth addon...
	I0729 12:04:47.312496  241491 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 12:04:47.326494  241491 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 12:04:47.326517  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:47.345624  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:47.692358  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:47.694535  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:47.826637  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:47.834554  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:48.191118  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:48.193228  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:48.316327  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:48.330133  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:48.691238  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:48.691918  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:48.816814  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:48.828993  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:49.194268  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:49.195939  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:49.315987  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:49.329152  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:49.692457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:49.692715  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:49.708099  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:49.815977  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:49.829906  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:50.190475  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:50.190620  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:50.316866  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:50.330713  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:50.691443  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:50.691949  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:50.816495  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:50.828939  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:51.192140  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:51.193220  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:51.316320  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:51.328417  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:51.694599  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:51.697217  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:51.709108  241491 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:51.816259  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:51.830420  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:52.194727  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:52.195373  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:52.208346  241491 pod_ready.go:97] pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.55 HostIPs:[{IP:192.168.39.55}] PodIP: PodIPs:[] StartTime:2024-07-29 12:04:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 12:04:41 +0000 UTC,FinishedAt:2024-07-29 12:04:51 +0000 UTC,ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82 Started:0xc0022a23c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 12:04:52.208381  241491 pod_ready.go:81] duration metric: took 9.505702586s for pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace to be "Ready" ...
	E0729 12:04:52.208395  241491 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-x4wkw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 12:04:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.55 HostIPs:[{IP:192.168.39.55}] PodIP: PodIPs:[] StartTime:2024-07-29 12:04:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 12:04:41 +0000 UTC,FinishedAt:2024-07-29 12:04:51 +0000 UTC,ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://1a825b52954dd55e54fadee5c88b544c2af81bfd3086a276b8dc866a111abc82 Started:0xc0022a23c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 12:04:52.208405  241491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.212489  241491 pod_ready.go:92] pod "etcd-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.212510  241491 pod_ready.go:81] duration metric: took 4.09539ms for pod "etcd-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.212522  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.218174  241491 pod_ready.go:92] pod "kube-apiserver-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.218195  241491 pod_ready.go:81] duration metric: took 5.665997ms for pod "kube-apiserver-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.218208  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.225050  241491 pod_ready.go:92] pod "kube-controller-manager-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.225070  241491 pod_ready.go:81] duration metric: took 6.854586ms for pod "kube-controller-manager-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.225084  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fp2hh" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.230399  241491 pod_ready.go:92] pod "kube-proxy-fp2hh" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.230419  241491 pod_ready.go:81] duration metric: took 5.327391ms for pod "kube-proxy-fp2hh" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.230431  241491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.316447  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:52.328434  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:52.605825  241491 pod_ready.go:92] pod "kube-scheduler-addons-631322" in "kube-system" namespace has status "Ready":"True"
	I0729 12:04:52.605849  241491 pod_ready.go:81] duration metric: took 375.412476ms for pod "kube-scheduler-addons-631322" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.605859  241491 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace to be "Ready" ...
	I0729 12:04:52.692519  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:52.694347  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:52.815470  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:52.832673  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:53.190792  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:53.191462  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:53.316242  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:53.328338  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:53.690989  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:53.691043  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:53.816101  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:53.834176  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:54.190234  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:54.193578  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:54.316680  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:54.329022  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:54.612173  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:54.689971  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:54.691199  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:54.816083  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:54.828507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:55.192572  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:55.192701  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:55.316823  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:55.329406  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:55.692475  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:55.692554  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:55.816984  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:55.830135  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:56.194285  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:56.195622  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:56.316734  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:56.334145  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:56.613221  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:56.690701  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:56.693975  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:56.816005  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:56.830119  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:57.192678  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:57.192997  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:57.317445  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:57.329198  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:57.691687  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:57.697745  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:57.817484  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:57.828890  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:58.189208  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:58.191591  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:58.316682  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:58.329258  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:58.690894  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:58.691232  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:58.816686  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:58.829181  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:59.111151  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:04:59.190122  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:59.192025  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:59.315658  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:59.328755  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:04:59.692475  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:04:59.692639  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:04:59.816914  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:04:59.831407  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:00.189773  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:00.191773  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:00.315978  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:00.329596  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:00.690048  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:00.691692  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:00.817568  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:00.829040  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:01.111888  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:01.190808  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:01.191667  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:01.316826  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:01.329279  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:01.694022  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:01.694464  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:01.816696  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:01.829508  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:02.190999  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:02.191485  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:02.316260  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:02.328267  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:02.691173  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:02.692440  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:02.816057  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:02.830309  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:03.112051  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:03.190466  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:03.192026  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:03.315956  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:03.329630  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:03.689836  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:03.691261  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:03.816306  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:03.828943  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:04.266191  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:04.270289  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:04.316066  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:04.330314  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:04.691110  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:04.692075  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:04.817005  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:04.836774  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:05.112387  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:05.191403  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:05.192754  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:05.316472  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:05.328521  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:05.693813  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:05.694077  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:05.817258  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:05.828676  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:06.191984  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:06.192746  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:06.633597  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:06.638912  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:06.692064  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:06.692491  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:06.815747  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:06.829483  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:07.192691  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:07.192898  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:07.316536  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:07.329583  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:07.611745  241491 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"False"
	I0729 12:05:07.692152  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:07.692651  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:07.816678  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:07.829187  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:08.111939  241491 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace has status "Ready":"True"
	I0729 12:05:08.111961  241491 pod_ready.go:81] duration metric: took 15.506095853s for pod "nvidia-device-plugin-daemonset-m8p57" in "kube-system" namespace to be "Ready" ...
	I0729 12:05:08.111989  241491 pod_ready.go:38] duration metric: took 28.554818245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
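
The repeated kapi.go:96 and pod_ready lines in this log are a poll loop: list the pods matching a label selector (or a single named pod) and keep waiting until each one reports the Ready condition or a timeout expires. A stripped-down client-go sketch of such a wait follows; it is a hypothetical helper rather than minikube's kapi/pod_ready code, with the kubeconfig path and selector taken from this log.

    // Sketch: wait until every pod matching a label selector reports Ready.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allReady := true
                for _, p := range pods.Items {
                    if !podReady(p) {
                        allReady = false
                        break
                    }
                }
                if allReady {
                    return nil
                }
            }
            time.Sleep(3 * time.Second) // poll interval; the log above ticks roughly twice a second
        }
        return fmt.Errorf("pods with selector %q in %q not Ready within %v", selector, ns, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        err = waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
        if err != nil {
            fmt.Println(err)
        }
    }
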
	I0729 12:05:08.112006  241491 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:05:08.112055  241491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:05:08.131111  241491 api_server.go:72] duration metric: took 33.162863602s to wait for apiserver process to appear ...
	I0729 12:05:08.131136  241491 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:05:08.131162  241491 api_server.go:253] Checking apiserver healthz at https://192.168.39.55:8443/healthz ...
	I0729 12:05:08.135210  241491 api_server.go:279] https://192.168.39.55:8443/healthz returned 200:
	ok
	I0729 12:05:08.136742  241491 api_server.go:141] control plane version: v1.30.3
	I0729 12:05:08.136762  241491 api_server.go:131] duration metric: took 5.62042ms to wait for apiserver health ...
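
The healthz probe above is a plain HTTPS GET against https://192.168.39.55:8443/healthz, retried until the endpoint answers 200 with body "ok". A minimal sketch of such a poll follows; it is illustrative only and skips certificate verification for brevity, whereas a real client would trust the cluster CA.

    // Sketch: poll an apiserver /healthz endpoint until it returns HTTP 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustration only: a production check would verify the apiserver cert.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered 200
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.55:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
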
	I0729 12:05:08.136770  241491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:05:08.148121  241491 system_pods.go:59] 18 kube-system pods found
	I0729 12:05:08.148146  241491 system_pods.go:61] "coredns-7db6d8ff4d-kr89x" [d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef] Running
	I0729 12:05:08.148154  241491 system_pods.go:61] "csi-hostpath-attacher-0" [e09927aa-20b1-40f7-ab75-fa9174452e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 12:05:08.148161  241491 system_pods.go:61] "csi-hostpath-resizer-0" [ab998654-3f6a-44cd-974c-011bed87cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 12:05:08.148167  241491 system_pods.go:61] "csi-hostpathplugin-kklhd" [b8cf1b29-7f6d-42f2-9ff3-42552849b06f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 12:05:08.148172  241491 system_pods.go:61] "etcd-addons-631322" [edef4d1d-878d-41c8-9f1f-5f905891bb1c] Running
	I0729 12:05:08.148177  241491 system_pods.go:61] "kube-apiserver-addons-631322" [dd83ef61-2e39-4360-a7c3-4e39579177c6] Running
	I0729 12:05:08.148181  241491 system_pods.go:61] "kube-controller-manager-addons-631322" [6d69ce90-11a1-4768-94f6-c42861eddc35] Running
	I0729 12:05:08.148189  241491 system_pods.go:61] "kube-ingress-dns-minikube" [ed104c0c-e54d-49f9-a443-bfeafe4cd1ef] Running
	I0729 12:05:08.148192  241491 system_pods.go:61] "kube-proxy-fp2hh" [02cf9a19-5834-400f-a520-406afe4dba9c] Running
	I0729 12:05:08.148199  241491 system_pods.go:61] "kube-scheduler-addons-631322" [c9f210fd-eaf2-49d3-b379-0ad0f1f2b54b] Running
	I0729 12:05:08.148203  241491 system_pods.go:61] "metrics-server-c59844bb4-5ckgn" [635ee934-5845-4b41-b592-e16cd7ca050a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 12:05:08.148211  241491 system_pods.go:61] "nvidia-device-plugin-daemonset-m8p57" [0f635111-3024-43e1-bb48-73600f90a010] Running
	I0729 12:05:08.148217  241491 system_pods.go:61] "registry-656c9c8d9c-n8scc" [01e3eb64-3cfb-4c8e-885d-d83fc4087b8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 12:05:08.148222  241491 system_pods.go:61] "registry-proxy-74lcm" [24d73911-de6a-48f4-94d5-427b8aabe740] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 12:05:08.148229  241491 system_pods.go:61] "snapshot-controller-745499f584-v67xh" [3ca7bbdd-71f4-4b73-81d2-43a1c496b3f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.148235  241491 system_pods.go:61] "snapshot-controller-745499f584-z8fzs" [c6a55e4c-6022-48ae-81b6-392c34013809] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.148239  241491 system_pods.go:61] "storage-provisioner" [1db38ec0-1c47-4390-9f77-8348dbc84682] Running
	I0729 12:05:08.148244  241491 system_pods.go:61] "tiller-deploy-6677d64bcd-sngfl" [9dcb8698-4a1e-4840-be97-c1bd6d3fd69a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 12:05:08.148251  241491 system_pods.go:74] duration metric: took 11.475234ms to wait for pod list to return data ...
	I0729 12:05:08.148260  241491 default_sa.go:34] waiting for default service account to be created ...
	I0729 12:05:08.150080  241491 default_sa.go:45] found service account: "default"
	I0729 12:05:08.150096  241491 default_sa.go:55] duration metric: took 1.830977ms for default service account to be created ...
	I0729 12:05:08.150102  241491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 12:05:08.159660  241491 system_pods.go:86] 18 kube-system pods found
	I0729 12:05:08.159686  241491 system_pods.go:89] "coredns-7db6d8ff4d-kr89x" [d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef] Running
	I0729 12:05:08.159696  241491 system_pods.go:89] "csi-hostpath-attacher-0" [e09927aa-20b1-40f7-ab75-fa9174452e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 12:05:08.159702  241491 system_pods.go:89] "csi-hostpath-resizer-0" [ab998654-3f6a-44cd-974c-011bed87cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 12:05:08.159710  241491 system_pods.go:89] "csi-hostpathplugin-kklhd" [b8cf1b29-7f6d-42f2-9ff3-42552849b06f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 12:05:08.159716  241491 system_pods.go:89] "etcd-addons-631322" [edef4d1d-878d-41c8-9f1f-5f905891bb1c] Running
	I0729 12:05:08.159721  241491 system_pods.go:89] "kube-apiserver-addons-631322" [dd83ef61-2e39-4360-a7c3-4e39579177c6] Running
	I0729 12:05:08.159725  241491 system_pods.go:89] "kube-controller-manager-addons-631322" [6d69ce90-11a1-4768-94f6-c42861eddc35] Running
	I0729 12:05:08.159731  241491 system_pods.go:89] "kube-ingress-dns-minikube" [ed104c0c-e54d-49f9-a443-bfeafe4cd1ef] Running
	I0729 12:05:08.159737  241491 system_pods.go:89] "kube-proxy-fp2hh" [02cf9a19-5834-400f-a520-406afe4dba9c] Running
	I0729 12:05:08.159742  241491 system_pods.go:89] "kube-scheduler-addons-631322" [c9f210fd-eaf2-49d3-b379-0ad0f1f2b54b] Running
	I0729 12:05:08.159748  241491 system_pods.go:89] "metrics-server-c59844bb4-5ckgn" [635ee934-5845-4b41-b592-e16cd7ca050a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 12:05:08.159757  241491 system_pods.go:89] "nvidia-device-plugin-daemonset-m8p57" [0f635111-3024-43e1-bb48-73600f90a010] Running
	I0729 12:05:08.159762  241491 system_pods.go:89] "registry-656c9c8d9c-n8scc" [01e3eb64-3cfb-4c8e-885d-d83fc4087b8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 12:05:08.159769  241491 system_pods.go:89] "registry-proxy-74lcm" [24d73911-de6a-48f4-94d5-427b8aabe740] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 12:05:08.159776  241491 system_pods.go:89] "snapshot-controller-745499f584-v67xh" [3ca7bbdd-71f4-4b73-81d2-43a1c496b3f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.159785  241491 system_pods.go:89] "snapshot-controller-745499f584-z8fzs" [c6a55e4c-6022-48ae-81b6-392c34013809] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 12:05:08.159791  241491 system_pods.go:89] "storage-provisioner" [1db38ec0-1c47-4390-9f77-8348dbc84682] Running
	I0729 12:05:08.159796  241491 system_pods.go:89] "tiller-deploy-6677d64bcd-sngfl" [9dcb8698-4a1e-4840-be97-c1bd6d3fd69a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 12:05:08.159802  241491 system_pods.go:126] duration metric: took 9.695717ms to wait for k8s-apps to be running ...
	I0729 12:05:08.159824  241491 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 12:05:08.159868  241491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:05:08.175772  241491 system_svc.go:56] duration metric: took 15.939846ms WaitForService to wait for kubelet
	I0729 12:05:08.175801  241491 kubeadm.go:582] duration metric: took 33.20755649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:05:08.175822  241491 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:05:08.178747  241491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:05:08.178772  241491 node_conditions.go:123] node cpu capacity is 2
	I0729 12:05:08.178786  241491 node_conditions.go:105] duration metric: took 2.959334ms to run NodePressure ...
	I0729 12:05:08.178798  241491 start.go:241] waiting for startup goroutines ...
	I0729 12:05:08.190942  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:08.191959  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:08.316221  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:08.330455  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:08.693379  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:08.693498  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:08.816832  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:08.830006  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:09.190682  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:09.192430  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:09.316884  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:09.330356  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:09.698091  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:09.698220  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:09.817131  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:09.829537  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:10.190773  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:10.190980  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:10.316353  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:10.329085  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:10.690659  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:10.691512  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:10.817188  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:10.829853  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:11.190260  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:11.192320  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:11.316336  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:11.328934  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:11.692930  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:11.693136  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:11.816803  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:11.829359  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:12.191551  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:12.192651  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:12.316526  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:12.329408  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:12.692673  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:12.704178  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:12.816243  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:12.828543  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:13.191014  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:13.192776  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:13.315638  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:13.329035  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:13.691868  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:13.694016  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:13.816521  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:13.830131  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:14.190504  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:14.191602  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:14.317223  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:14.330798  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:15.050155  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:15.052852  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:15.053161  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:15.053596  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:15.238095  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:15.240787  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:15.316876  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:15.329315  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:15.691970  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:15.694359  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:15.823680  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:15.828601  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:16.190285  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:16.192112  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:16.315891  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:16.329343  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:16.690185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:16.692637  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:16.816784  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:16.833950  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:17.192043  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:17.194073  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:17.316816  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:17.329932  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:17.691981  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:17.693247  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:17.816505  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:17.829349  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:18.192362  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:18.192674  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:18.316312  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:18.328867  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:18.690266  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:18.691975  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:18.816097  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:18.828208  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:19.190046  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:19.191489  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:19.316567  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:19.328847  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:19.691160  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:19.691218  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:19.816126  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:19.829340  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:20.189847  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:20.191039  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:20.316417  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:20.328504  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:20.690312  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:20.691597  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:20.817325  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:20.828288  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:21.189844  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:21.192150  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:21.316049  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:21.330998  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:21.690945  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:21.691557  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:21.816514  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:21.829857  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:22.192081  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:22.192117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:22.316357  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:22.331438  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:22.689591  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:22.691972  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:22.815996  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:22.829352  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:23.190928  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:23.191212  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:23.316456  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:23.330047  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:23.691397  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:23.692621  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:23.816513  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:23.829112  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:24.190244  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:24.191537  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:24.316829  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:24.331586  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:24.690639  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:24.692067  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:24.816317  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:24.829571  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:25.189965  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:25.191271  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:25.316380  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:25.328912  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:25.696854  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:25.697143  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:25.816890  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:25.829157  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:26.192806  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:26.192911  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:26.318005  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:26.331251  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:26.691026  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:26.691126  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:26.816244  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:26.829179  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:27.192046  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:27.192335  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:27.316253  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:27.328835  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:27.690175  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:27.691798  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:27.818354  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:27.829258  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:28.190034  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:28.191390  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:28.317842  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:28.330837  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:28.690275  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:28.691255  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:28.816630  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:28.829256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:29.189827  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:29.191168  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:29.316071  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:29.328477  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:29.690343  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:29.691836  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:29.817056  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:29.828696  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:30.190387  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:30.190717  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:30.317121  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:30.329671  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:30.691294  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:30.691472  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:30.816745  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:30.833535  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:31.192155  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:31.193556  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:31.316717  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:31.329352  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:31.691438  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:31.691523  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:31.817065  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:31.830243  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:32.193234  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:32.198143  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:32.317007  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:32.336224  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:32.733778  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:32.735806  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:32.816673  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:32.828943  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:33.191431  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:33.193056  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:33.316017  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:33.328514  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:33.691826  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:33.693171  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:33.816224  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:33.830046  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:34.191771  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:34.192215  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:34.316698  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:34.330086  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:34.690684  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:34.691643  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:34.817370  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:34.828610  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:35.525413  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:35.540456  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:35.541760  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:35.543399  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:35.690999  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:35.691185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:35.817067  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:35.832760  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:36.190192  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:36.196667  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:36.316779  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:36.330403  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:36.690236  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:36.692049  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 12:05:36.816470  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:36.829186  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:37.189688  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:37.191666  241491 kapi.go:107] duration metric: took 53.505272119s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 12:05:37.318027  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:37.330913  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:37.691127  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:37.817220  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:37.829766  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:38.190790  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:38.317331  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:38.328871  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:38.690882  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:38.816647  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:38.829905  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:39.189681  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:39.316658  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:39.329394  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:39.691185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:39.819024  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:39.829757  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:40.190117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:40.315974  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:40.328861  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:40.690449  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:40.816082  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:40.830156  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:41.192307  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:41.316104  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:41.328477  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:41.689906  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:41.817639  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:41.829742  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:42.190408  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:42.316080  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:42.330211  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:42.690730  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:42.817037  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:42.829413  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:43.190798  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:43.316721  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:43.329777  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:43.689712  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:43.816256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:43.829122  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:44.190526  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:44.316579  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:44.329806  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:44.690213  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:44.817058  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:44.829684  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:45.189779  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:45.316339  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:45.328656  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:45.690168  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:45.815828  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:45.830437  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:46.192322  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:46.316917  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:46.329692  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:46.690751  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:46.817618  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:46.832965  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:47.190763  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:47.470384  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:47.470733  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:47.690363  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:47.816088  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:47.831066  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:48.191584  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:48.317708  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:48.330899  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:48.690438  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:48.816357  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:48.828457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:49.190527  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:49.316281  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:49.328176  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:49.689971  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:49.816505  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:49.829006  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:50.190378  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:50.316439  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:50.329015  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:50.690281  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:50.816430  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:50.830851  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:51.190561  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:51.316812  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:51.328954  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:51.691452  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:51.816038  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:51.830019  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:52.190991  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:52.316772  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:52.332123  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:52.690932  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:52.816139  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:52.831022  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:53.190118  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:53.315383  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:53.328703  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:53.690930  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:53.816457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:53.828733  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:54.189751  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:54.316457  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:54.328842  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:54.690653  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:54.816606  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:54.829351  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:55.189651  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:55.316426  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:55.332885  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:55.692856  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:55.815909  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:55.829486  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:56.190623  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:56.315973  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:56.329317  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:56.690010  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:56.816507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:56.833979  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:57.190741  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:57.316274  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:57.328320  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:57.692071  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:57.819253  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:57.833028  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:58.189459  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:58.317652  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:58.328606  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:58.691335  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:58.816311  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:58.829292  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:59.190013  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:59.316550  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:59.329071  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:05:59.690561  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:05:59.816172  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:05:59.828384  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:00.190224  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:00.320613  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:00.330019  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:00.689732  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:00.816297  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:00.828824  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:01.494432  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:01.495888  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:01.515272  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:01.694378  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:01.816080  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:01.828280  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:02.189721  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:02.316547  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:02.328264  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:02.690662  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:02.816104  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:02.828538  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:03.189886  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:03.317444  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:03.340143  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:03.691157  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:03.816047  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:03.829412  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:04.191171  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:04.315919  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:04.329178  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:04.690462  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:04.817063  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:04.830110  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:05.196546  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:05.317155  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:05.328902  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:05.690561  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:05.818375  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:05.835684  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:06.190523  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:06.316080  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:06.328177  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:06.695260  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:06.816784  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:06.830615  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:07.189730  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:07.316390  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:07.328571  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:07.690070  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:07.816821  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:07.829479  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:08.190336  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:08.318256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:08.329510  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:08.690546  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:08.816569  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:08.828620  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:09.193008  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:09.316434  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:09.329274  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:09.690437  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:09.816945  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:09.829442  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:10.615542  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:10.616744  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:10.618278  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:10.690327  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:10.816413  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:10.834615  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:11.190597  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:11.316610  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:11.328771  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:11.690869  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:11.816584  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:11.829444  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:12.194664  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:12.316422  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:12.329487  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:12.691236  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:12.815938  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:12.834241  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 12:06:13.190056  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:13.323033  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:13.332046  241491 kapi.go:107] duration metric: took 1m27.508290896s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 12:06:13.690671  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:13.816093  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:14.190966  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:14.316507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:14.691111  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:14.817480  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:15.191185  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:15.316064  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:15.690881  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:15.817059  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:16.190441  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:16.316487  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:16.690972  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:16.817161  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:17.190645  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:17.316766  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:17.690508  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:17.816235  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:18.190649  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:18.316456  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:18.691106  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:18.816138  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:19.192869  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:19.316845  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:19.690338  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:19.815507  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:20.191638  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:20.316314  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:20.690754  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:20.817207  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:21.190533  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:21.318120  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:21.690915  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:21.816061  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:22.190401  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:22.315891  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:22.689866  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:22.817269  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:23.191052  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:23.317209  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:23.690117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:23.816697  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:24.191064  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:24.316504  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:24.690547  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:24.816193  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:25.190695  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:25.316591  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:25.691021  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:25.817103  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:26.190846  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:26.316977  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:26.690596  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:26.816899  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:27.190276  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:27.315983  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:27.690458  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:27.816010  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:28.190559  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:28.316638  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:28.691520  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:28.816990  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:29.190891  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:29.316613  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:29.691466  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:29.816734  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:30.189734  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:30.316746  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:30.690601  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:30.816693  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:31.190843  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:31.316725  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:31.690432  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:31.816231  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:32.190988  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:32.317082  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:32.690083  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:32.817008  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:33.190468  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:33.316193  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:33.692005  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:33.817554  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:34.190226  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:34.316836  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:34.690141  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:34.816962  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:35.190216  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:35.315943  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:35.690597  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:35.816203  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:36.190630  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:36.317887  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:36.690303  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:36.816065  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:37.191289  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:37.316111  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:37.690401  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:37.816095  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:38.191168  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:38.315911  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:38.690014  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:38.816202  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:39.189770  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:39.316576  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:39.691167  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:39.816942  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:40.190145  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:40.315944  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:40.689989  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:40.816950  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:41.190489  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:41.318211  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:41.690687  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:41.816173  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:42.190745  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:42.316141  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:42.690147  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:42.815808  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:43.190117  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:43.315831  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:43.689924  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:43.815723  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:44.191171  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:44.317458  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:44.691806  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:44.817163  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:45.190479  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:45.316096  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:45.690993  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:45.817262  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:46.191300  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:46.317312  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:46.690736  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:46.817256  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:47.190655  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:47.316147  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:47.690225  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:47.815924  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:48.189898  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:48.317612  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:48.692401  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:48.816621  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:49.190516  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:49.316577  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:49.690812  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:49.816682  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:50.190854  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:50.316984  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:50.690514  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:50.816127  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:51.190788  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:51.316891  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:51.690126  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:51.815622  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:52.191061  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:52.316914  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:52.689901  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:52.816655  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:53.190825  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:53.316470  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:53.691279  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:53.816142  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:54.190908  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:54.316412  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:54.691044  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:54.817149  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:55.191176  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:55.315864  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:55.690513  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:55.816124  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:56.191005  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:56.317779  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:56.690154  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:56.815765  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:57.190260  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:57.316113  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:57.692405  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:57.817487  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:58.194315  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:58.315545  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:58.690582  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:58.816492  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:59.190423  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:59.315945  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:06:59.690116  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:06:59.816254  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:00.197132  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:00.316754  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:00.691207  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:00.815994  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:01.190720  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:01.317650  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:01.690040  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:01.816556  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:02.190765  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:02.316719  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:02.689845  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:02.816540  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:03.191315  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:03.316281  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:03.690529  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:03.816047  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:04.191177  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:04.316659  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:04.694100  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:04.816828  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:05.193884  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:05.317337  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:05.690723  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:05.816452  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:06.567016  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:06.567294  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:06.692157  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:06.816158  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:07.189998  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:07.316903  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:07.691154  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:07.815950  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:08.192386  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:08.321263  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:08.769351  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:09.055973  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:09.193116  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:09.317175  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:09.693523  241491 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 12:07:09.816135  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:10.190406  241491 kapi.go:107] duration metric: took 2m26.504709624s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 12:07:10.316310  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:10.816264  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:11.316903  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:11.816362  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:12.316604  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:12.816140  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:13.316618  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:13.821625  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:14.316424  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:14.816208  241491 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 12:07:15.316642  241491 kapi.go:107] duration metric: took 2m28.00414682s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 12:07:15.318590  241491 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-631322 cluster.
	I0729 12:07:15.319990  241491 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 12:07:15.321385  241491 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
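For reference, the opt-out described in the log lines above is just a pod label. A minimal sketch, assuming the commonly used "true" value for the `gcp-auth-skip-secret` key and an illustrative pod name and image (neither appears in this run):

kubectl --context addons-631322 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-without-gcp-creds        # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"      # keeps the GCP credential mount out of this pod
spec:
  containers:
  - name: demo
    image: busybox                    # illustrative image
    command: ["sleep", "3600"]
EOF

The label has to be present when the pod is created, since the credentials are injected at creation time; that is why the log suggests recreating existing pods or rerunning the addon with --refresh.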
	I0729 12:07:15.323147  241491 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, helm-tiller, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 12:07:15.324417  241491 addons.go:510] duration metric: took 2m40.356141624s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server helm-tiller yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 12:07:15.324461  241491 start.go:246] waiting for cluster config update ...
	I0729 12:07:15.324481  241491 start.go:255] writing updated cluster config ...
	I0729 12:07:15.324741  241491 ssh_runner.go:195] Run: rm -f paused
	I0729 12:07:15.376478  241491 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 12:07:15.378201  241491 out.go:177] * Done! kubectl is now configured to use "addons-631322" cluster and "default" namespace by default
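As a quick sanity check of that final message, the configured context and default namespace can be confirmed from the host with standard kubectl commands (no project-specific tooling assumed):

kubectl config current-context                                    # expected to print addons-631322
kubectl config view --minify --output 'jsonpath={..namespace}'    # empty or "default"
kubectl get pods                                                  # now targets the addons-631322 cluster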
	
	
	==> CRI-O <==
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.190529563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255170190479413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e22702ed-88c9-4b96-89f6-f40cbb53520f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.191126823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8784a303-f406-4199-8a3f-d2b602667fa3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.191204530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8784a303-f406-4199-8a3f-d2b602667fa3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.191682094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.name: metrics-se
rver,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cre
atedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172225468109690442
3,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb
2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8784a303-f406-4199-8a3f-d2b602667fa3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.228136762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40b38592-8117-4642-ab8d-86353a62857b name=/runtime.v1.RuntimeService/Version
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.228237794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40b38592-8117-4642-ab8d-86353a62857b name=/runtime.v1.RuntimeService/Version
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.229271994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f3ed4f0-4804-4f8a-8e23-ff5f2311fdd0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.230756580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255170230730180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f3ed4f0-4804-4f8a-8e23-ff5f2311fdd0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.231347815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57ba5089-b6d0-46fa-9a17-cafed76196be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.231401881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57ba5089-b6d0-46fa-9a17-cafed76196be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.231897079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.name: metrics-se
rver,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cre
atedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172225468109690442
3,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb
2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57ba5089-b6d0-46fa-9a17-cafed76196be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.283372978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad3a43d2-5745-4ac8-9eef-0a2e3e4747a1 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.283686058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad3a43d2-5745-4ac8-9eef-0a2e3e4747a1 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.285193639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acc8963a-d7e7-4b27-b663-e68153d6db56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.286489717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255170286463865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acc8963a-d7e7-4b27-b663-e68153d6db56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.287140354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29d8ba8a-899d-4671-b3eb-5d6fc8314333 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.287226106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29d8ba8a-899d-4671-b3eb-5d6fc8314333 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.287523294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.name: metrics-se
rver,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cre
atedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172225468109690442
3,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb
2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29d8ba8a-899d-4671-b3eb-5d6fc8314333 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.319583248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92578540-4967-42b1-aceb-db1ad9a2d6d3 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.319679782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92578540-4967-42b1-aceb-db1ad9a2d6d3 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.320512677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03756e18-dd83-4dc5-8832-af590411e2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.322069598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255170322040870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03756e18-dd83-4dc5-8832-af590411e2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.322629748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=246d205a-a7c2-4872-9a54-a47f5e5b1f5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.322701422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=246d205a-a7c2-4872-9a54-a47f5e5b1f5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:12:50 addons-631322 crio[681]: time="2024-07-29 12:12:50.323184465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeb506ebad23e579c32a34b6ed74d617de715334a823707fa8892e5be989d06d,PodSandboxId:cb02b74aaacead2f0ef1f84e55d0f9e3060215739375a80054c3faac212ef987,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722255045876457704,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-gks46,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b899b274-8d19-46a4-8c01-0d036b3673f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9a3a1,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4288c06debbccbf482e986922ea83a69b419795eb7beda8b3580a275e03d93,PodSandboxId:f4a1507010891a551569b6c1429734d6a3823d566eacf4922322cb84bc810b7d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722254940900728668,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f7h5m,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ab0fd630-85fb-4a20-9d29-abe07d251a64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 79624eb1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9528646d04ce9dea650837a283758fbaa7fcb6310253454fb7481f2ca1b76f1,PodSandboxId:7e4a70225bfc98bf3d97c98460af934b0a9605e5283721c520c562ebe38f584d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722254904506567824,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5920ddfd-ff15-402c-bf7c-8b1f9591b455,},Annotations:map[string]string{io.kubernetes.container.hash: 5fe8171,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9dc9fcabacb6f7c1374f2a780ead1dc62796f077374f45e61565f05921326fb,PodSandboxId:2826518d0876d160a2c9dc207c5bd2474dcd7c6909372b026349bc649d21eda0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722254843087594326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kuberne
tes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a129df2-ed1b-450f-8973-43601739163b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ec12179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886,PodSandboxId:a776831ecd944111f46251bf0771aec3efd83ec07b0e08b901ff973ef601655b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722254717898909555,Labels:map[string]string{io.kubernetes.container.name: metrics-se
rver,io.kubernetes.pod.name: metrics-server-c59844bb4-5ckgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 635ee934-5845-4b41-b592-e16cd7ca050a,},Annotations:map[string]string{io.kubernetes.container.hash: daec34bb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cre
atedAt:1722254682265900463,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24,PodSandboxId:66da728d5e7b04693d0cf4161ac5178228bae4d7034a36b3f27ef72407bed429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172225468109690442
3,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kr89x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cdbe5a-cad2-4f9c-aff4-30e73d68eeef,},Annotations:map[string]string{io.kubernetes.container.hash: 792af7aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf,PodSandboxId:4ecb5de2617f17ac3de409581b07238a2e54e02cd1495998f49b99b9a1827286,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254680744098489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1db38ec0-1c47-4390-9f77-8348dbc84682,},Annotations:map[string]string{io.kubernetes.container.hash: c8fa539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824,PodSandboxId:bb587bf7205caa4e53f071e84638666a0b5a0bab0c1b8bbb44fd1496661030b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254678887607358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fp2hh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02cf9a19-5834-400f-a520-406afe4dba9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8022a85f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591,PodSandboxId:75b5841d4238f49f8c1f520af3c49be593e03822d3d694ba9f778f01568b7c9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254655845696728,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112922532fc2751ee1435086e09f044d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0,PodSandboxId:65699b4b1d01d94c4b79ef3fb21a8ee1640942ba388402f8b8a38a6eaaa03c72,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254655838597899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f8b3510f968e4c13bd7a9e85352278,},Annotations:map[string]string{io.kubernetes.container.hash: 92057ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4,PodSandboxId:36fa1f60dada38d120f0aae168be4757feeca72e805f5b7dfa94c16065ac0df2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254655817416641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6ea7ff5086f4730a2c156e8bde3484,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52,PodSandboxId:19b45a3ca0106486aa547c23f93caf6928b6b3f91f18945ebed9c7c2deb97604,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb
2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254655751083325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbda3c748cc325186b62b37a762002d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=246d205a-a7c2-4872-9a54-a47f5e5b1f5a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aeb506ebad23e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   cb02b74aaacea       hello-world-app-6778b5fc9f-gks46
	ee4288c06debb       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   3 minutes ago       Running             headlamp                  0                   f4a1507010891       headlamp-7867546754-f7h5m
	f9528646d04ce       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   7e4a70225bfc9       nginx
	a9dc9fcabacb6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   2826518d0876d       busybox
	0f5644a7a58fd       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   a776831ecd944       metrics-server-c59844bb4-5ckgn
	cce0435c87268       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       1                   4ecb5de2617f1       storage-provisioner
	d217b1229e1c1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   66da728d5e7b0       coredns-7db6d8ff4d-kr89x
	55022b7395a48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Exited              storage-provisioner       0                   4ecb5de2617f1       storage-provisioner
	8c5341ae2d216       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        8 minutes ago       Running             kube-proxy                0                   bb587bf7205ca       kube-proxy-fp2hh
	df940fb1a7f53       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   75b5841d4238f       kube-controller-manager-addons-631322
	390f0d98bf88b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   65699b4b1d01d       etcd-addons-631322
	15cd83442f243       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   36fa1f60dada3       kube-apiserver-addons-631322
	93c99c28f8c24       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   19b45a3ca0106       kube-scheduler-addons-631322
	
	
	==> coredns [d217b1229e1c133c37ba176a7bd91ea4ed0d0d4bda1dd88565332df357407d24] <==
	[INFO] 10.244.0.8:38022 - 8040 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008526s
	[INFO] 10.244.0.8:53962 - 17306 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058472s
	[INFO] 10.244.0.8:53962 - 6548 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063898s
	[INFO] 10.244.0.8:33416 - 4338 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010187s
	[INFO] 10.244.0.8:33416 - 30960 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061915s
	[INFO] 10.244.0.8:51045 - 64265 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008912s
	[INFO] 10.244.0.8:51045 - 17674 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057691s
	[INFO] 10.244.0.8:45862 - 44362 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137005s
	[INFO] 10.244.0.8:45862 - 61519 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000028329s
	[INFO] 10.244.0.8:40403 - 6616 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054623s
	[INFO] 10.244.0.8:40403 - 990 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037149s
	[INFO] 10.244.0.8:59377 - 30614 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052023s
	[INFO] 10.244.0.8:59377 - 41623 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000022872s
	[INFO] 10.244.0.8:47916 - 10964 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049728s
	[INFO] 10.244.0.8:47916 - 34775 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035977s
	[INFO] 10.244.0.22:48402 - 57253 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000633417s
	[INFO] 10.244.0.22:39577 - 50268 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000661586s
	[INFO] 10.244.0.22:42292 - 26778 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001334s
	[INFO] 10.244.0.22:56873 - 9097 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191236s
	[INFO] 10.244.0.22:58338 - 6010 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000725s
	[INFO] 10.244.0.22:49619 - 13408 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064622s
	[INFO] 10.244.0.22:58157 - 47120 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000825776s
	[INFO] 10.244.0.22:48918 - 61652 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0009465s
	[INFO] 10.244.0.25:52737 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000283391s
	[INFO] 10.244.0.25:37171 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000136457s
	
	
	==> describe nodes <==
	Name:               addons-631322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-631322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=addons-631322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_04_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-631322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:04:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-631322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:12:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:11:00 +0000   Mon, 29 Jul 2024 12:04:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:11:00 +0000   Mon, 29 Jul 2024 12:04:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:11:00 +0000   Mon, 29 Jul 2024 12:04:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:11:00 +0000   Mon, 29 Jul 2024 12:04:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    addons-631322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbfd3884a4b246a2a72c3d23bb089cf3
	  System UUID:                dbfd3884-a4b2-46a2-a72c-3d23bb089cf3
	  Boot ID:                    7ae55269-1cce-42ab-9e04-3fd98ff87fed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  default                     hello-world-app-6778b5fc9f-gks46         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  headlamp                    headlamp-7867546754-f7h5m                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 coredns-7db6d8ff4d-kr89x                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m15s
	  kube-system                 etcd-addons-631322                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m29s
	  kube-system                 kube-apiserver-addons-631322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-addons-631322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-fp2hh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-addons-631322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 metrics-server-c59844bb4-5ckgn           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m9s   kube-proxy       
	  Normal  Starting                 8m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m29s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m29s  kubelet          Node addons-631322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s  kubelet          Node addons-631322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s  kubelet          Node addons-631322 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m28s  kubelet          Node addons-631322 status is now: NodeReady
	  Normal  RegisteredNode           8m16s  node-controller  Node addons-631322 event: Registered Node addons-631322 in Controller
	
	
	==> dmesg <==
	[  +5.042291] kauditd_printk_skb: 168 callbacks suppressed
	[  +6.463948] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 12:05] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.677313] kauditd_printk_skb: 4 callbacks suppressed
	[ +33.459945] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.028638] kauditd_printk_skb: 55 callbacks suppressed
	[Jul29 12:06] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.112027] kauditd_printk_skb: 17 callbacks suppressed
	[ +45.192936] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:07] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.108107] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.823976] kauditd_printk_skb: 9 callbacks suppressed
	[ +18.316332] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.275601] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.706141] kauditd_printk_skb: 39 callbacks suppressed
	[Jul29 12:08] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.255238] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.048850] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.050065] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.312562] kauditd_printk_skb: 8 callbacks suppressed
	[ +13.277376] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.020295] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.880183] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.193694] kauditd_printk_skb: 16 callbacks suppressed
	[Jul29 12:10] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [390f0d98bf88b606b99570e9b443a5d4b2c3274a3e2194b7631381ac9de814a0] <==
	{"level":"warn","ts":"2024-07-29T12:07:06.544173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.828352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4365"}
	{"level":"info","ts":"2024-07-29T12:07:06.544223Z","caller":"traceutil/trace.go:171","msg":"trace[1191993287] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1252; }","duration":"245.900821ms","start":"2024-07-29T12:07:06.298314Z","end":"2024-07-29T12:07:06.544215Z","steps":["trace[1191993287] 'agreement among raft nodes before linearized reading'  (duration: 245.790183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:07:06.544413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.640768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:07:06.544461Z","caller":"traceutil/trace.go:171","msg":"trace[1130243268] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1252; }","duration":"115.707354ms","start":"2024-07-29T12:07:06.428744Z","end":"2024-07-29T12:07:06.544451Z","steps":["trace[1130243268] 'agreement among raft nodes before linearized reading'  (duration: 115.644327ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:07:08.750064Z","caller":"traceutil/trace.go:171","msg":"trace[927867220] linearizableReadLoop","detail":"{readStateIndex:1304; appliedIndex:1303; }","duration":"198.50366ms","start":"2024-07-29T12:07:08.551537Z","end":"2024-07-29T12:07:08.750041Z","steps":["trace[927867220] 'read index received'  (duration: 198.340512ms)","trace[927867220] 'applied index is now lower than readState.Index'  (duration: 162.449µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:07:08.750157Z","caller":"traceutil/trace.go:171","msg":"trace[696740883] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"207.210253ms","start":"2024-07-29T12:07:08.54294Z","end":"2024-07-29T12:07:08.750151Z","steps":["trace[696740883] 'process raft request'  (duration: 206.993535ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:07:08.75051Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.921978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-29T12:07:08.750555Z","caller":"traceutil/trace.go:171","msg":"trace[597650258] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1255; }","duration":"199.041685ms","start":"2024-07-29T12:07:08.551505Z","end":"2024-07-29T12:07:08.750547Z","steps":["trace[597650258] 'agreement among raft nodes before linearized reading'  (duration: 198.899705ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:07:09.032969Z","caller":"traceutil/trace.go:171","msg":"trace[1476798487] linearizableReadLoop","detail":"{readStateIndex:1305; appliedIndex:1304; }","duration":"234.920667ms","start":"2024-07-29T12:07:08.798033Z","end":"2024-07-29T12:07:09.032953Z","steps":["trace[1476798487] 'read index received'  (duration: 230.43909ms)","trace[1476798487] 'applied index is now lower than readState.Index'  (duration: 4.480852ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:07:09.033469Z","caller":"traceutil/trace.go:171","msg":"trace[1748490560] transaction","detail":"{read_only:false; response_revision:1256; number_of_response:1; }","duration":"277.754772ms","start":"2024-07-29T12:07:08.755702Z","end":"2024-07-29T12:07:09.033457Z","steps":["trace[1748490560] 'process raft request'  (duration: 272.840422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:07:09.03451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.46247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4365"}
	{"level":"info","ts":"2024-07-29T12:07:09.037925Z","caller":"traceutil/trace.go:171","msg":"trace[162234053] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1256; }","duration":"239.905325ms","start":"2024-07-29T12:07:08.798009Z","end":"2024-07-29T12:07:09.037914Z","steps":["trace[162234053] 'agreement among raft nodes before linearized reading'  (duration: 236.415614ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:08:02.428973Z","caller":"traceutil/trace.go:171","msg":"trace[2143638598] linearizableReadLoop","detail":"{readStateIndex:1628; appliedIndex:1627; }","duration":"184.268511ms","start":"2024-07-29T12:08:02.24462Z","end":"2024-07-29T12:08:02.428888Z","steps":["trace[2143638598] 'read index received'  (duration: 184.043907ms)","trace[2143638598] 'applied index is now lower than readState.Index'  (duration: 223.935µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:08:02.429613Z","caller":"traceutil/trace.go:171","msg":"trace[1148497118] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"458.297332ms","start":"2024-07-29T12:08:01.971304Z","end":"2024-07-29T12:08:02.429601Z","steps":["trace[1148497118] 'process raft request'  (duration: 457.390154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:08:02.431922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T12:08:01.971293Z","time spent":"460.379637ms","remote":"127.0.0.1:60164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1987,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/namespaces/gadget\" mod_revision:1490 > success:<request_put:<key:\"/registry/namespaces/gadget\" value_size:1952 >> failure:<request_range:<key:\"/registry/namespaces/gadget\" > >"}
	{"level":"warn","ts":"2024-07-29T12:08:02.430331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.682264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11013"}
	{"level":"info","ts":"2024-07-29T12:08:02.432853Z","caller":"traceutil/trace.go:171","msg":"trace[874712766] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1559; }","duration":"188.236637ms","start":"2024-07-29T12:08:02.244594Z","end":"2024-07-29T12:08:02.43283Z","steps":["trace[874712766] 'agreement among raft nodes before linearized reading'  (duration: 185.629326ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:09:00.56862Z","caller":"traceutil/trace.go:171","msg":"trace[279079184] linearizableReadLoop","detail":"{readStateIndex:2051; appliedIndex:2050; }","duration":"141.066707ms","start":"2024-07-29T12:09:00.42752Z","end":"2024-07-29T12:09:00.568587Z","steps":["trace[279079184] 'read index received'  (duration: 140.991583ms)","trace[279079184] 'applied index is now lower than readState.Index'  (duration: 74.258µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:09:00.56878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.232663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:09:00.568853Z","caller":"traceutil/trace.go:171","msg":"trace[1854130269] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1964; }","duration":"141.351652ms","start":"2024-07-29T12:09:00.427493Z","end":"2024-07-29T12:09:00.568844Z","steps":["trace[1854130269] 'agreement among raft nodes before linearized reading'  (duration: 141.227318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:09:00.568682Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T12:09:00.222052Z","time spent":"346.618783ms","remote":"127.0.0.1:60072","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-07-29T12:09:00.799875Z","caller":"traceutil/trace.go:171","msg":"trace[1991790229] transaction","detail":"{read_only:false; response_revision:1965; number_of_response:1; }","duration":"229.480947ms","start":"2024-07-29T12:09:00.57019Z","end":"2024-07-29T12:09:00.799671Z","steps":["trace[1991790229] 'process raft request'  (duration: 228.418258ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:09:00.800215Z","caller":"traceutil/trace.go:171","msg":"trace[621356182] linearizableReadLoop","detail":"{readStateIndex:2052; appliedIndex:2051; }","duration":"131.618464ms","start":"2024-07-29T12:09:00.667974Z","end":"2024-07-29T12:09:00.799592Z","steps":["trace[621356182] 'read index received'  (duration: 130.57059ms)","trace[621356182] 'applied index is now lower than readState.Index'  (duration: 1.047403ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:09:00.800745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.772177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3607"}
	{"level":"info","ts":"2024-07-29T12:09:00.800776Z","caller":"traceutil/trace.go:171","msg":"trace[1494508932] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1965; }","duration":"132.81481ms","start":"2024-07-29T12:09:00.667952Z","end":"2024-07-29T12:09:00.800767Z","steps":["trace[1494508932] 'agreement among raft nodes before linearized reading'  (duration: 132.638027ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:12:50 up 9 min,  0 users,  load average: 0.22, 0.74, 0.53
	Linux addons-631322 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [15cd83442f243c5e0d4a116a9feadb062cd45580e09370ba70ecc21fec28b1f4] <==
	W0729 12:07:57.842838       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 12:08:09.306026       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 12:08:19.164243       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 12:08:19.409905       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.127.103"}
	E0729 12:08:31.062592       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 12:08:45.974061       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:45.974118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.034128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.034185       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.046036       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.046089       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.057167       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.060867       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 12:08:46.121506       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 12:08:46.121558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 12:08:47.046722       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 12:08:47.122433       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0729 12:08:47.122540       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0729 12:08:52.572768       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.33.154"}
	I0729 12:09:00.802153       1 trace.go:236] Trace[371396522]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.55,type:*v1.Endpoints,resource:apiServerIPInfo (29-Jul-2024 12:09:00.220) (total time: 581ms):
	Trace[371396522]: ---"Transaction prepared" 348ms (12:09:00.569)
	Trace[371396522]: ---"Txn call completed" 232ms (12:09:00.802)
	Trace[371396522]: [581.910767ms] [581.910767ms] END
	I0729 12:10:42.196678       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.237.23"}
	E0729 12:10:44.173733       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [df940fb1a7f53edf98b0e5f14080f07a0a8dd980d3700f18a712e565cec5b591] <==
	I0729 12:10:44.064223       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0729 12:10:46.200277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="10.65943ms"
	I0729 12:10:46.200776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="97.54µs"
	W0729 12:10:48.625600       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:10:48.625739       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 12:10:54.127927       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0729 12:11:16.672224       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:11:16.672342       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:11:22.176247       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:11:22.176314       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:11:26.533987       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:11:26.534032       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:11:32.358728       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:11:32.359016       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:12:03.769073       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:12:03.769165       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:12:05.309835       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:12:05.309866       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:12:09.941529       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:12:09.941638       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:12:17.154969       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:12:17.155022       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 12:12:48.234106       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 12:12:48.234186       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 12:12:49.303647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="16.01µs"
	
	
	==> kube-proxy [8c5341ae2d216014521265ad07071eefe3458dfc8c304669e6ea8cb58ca3e824] <==
	I0729 12:04:40.600353       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:04:40.627495       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	I0729 12:04:40.790265       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:04:40.790323       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:04:40.790341       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:04:40.798179       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:04:40.798369       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:04:40.798398       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:04:40.800633       1 config.go:192] "Starting service config controller"
	I0729 12:04:40.800659       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:04:40.800720       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:04:40.800727       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:04:40.801143       1 config.go:319] "Starting node config controller"
	I0729 12:04:40.801150       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:04:40.901758       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:04:40.901844       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:04:40.901865       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [93c99c28f8c244cfea9a64565c9b329d18dfd498d40892f1cc76609af13ccf52] <==
	W0729 12:04:18.612576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:04:18.613323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:04:18.612695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:04:18.613374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:04:18.612751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 12:04:18.613422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 12:04:18.612849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:04:18.613470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:04:18.607462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:04:18.613524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:04:18.615208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:04:18.615262       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:04:19.495529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:04:19.495629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:04:19.641980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:04:19.643228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:04:19.704090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:04:19.704187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:04:19.724202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 12:04:19.724383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:04:19.747423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:04:19.747509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:04:20.037328       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:04:20.037454       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 12:04:22.798275       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:10:57 addons-631322 kubelet[1277]: I0729 12:10:57.027298    1277 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 12:11:21 addons-631322 kubelet[1277]: E0729 12:11:21.060341    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:11:21 addons-631322 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:11:21 addons-631322 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:11:21 addons-631322 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:11:21 addons-631322 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:11:21 addons-631322 kubelet[1277]: I0729 12:11:21.499151    1277 scope.go:117] "RemoveContainer" containerID="dad07f3df11b626302014e6609f2371a7e6068abe18e3bca287745a46c571e1f"
	Jul 29 12:11:21 addons-631322 kubelet[1277]: I0729 12:11:21.530281    1277 scope.go:117] "RemoveContainer" containerID="978edbef1d26364b5710a9f3a37efb4e1fe94cf23436f2bfd04c4af1ff13e17a"
	Jul 29 12:12:21 addons-631322 kubelet[1277]: E0729 12:12:21.059082    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:12:21 addons-631322 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:12:21 addons-631322 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:12:21 addons-631322 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:12:21 addons-631322 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:12:25 addons-631322 kubelet[1277]: I0729 12:12:25.027463    1277 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 12:12:49 addons-631322 kubelet[1277]: I0729 12:12:49.332387    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-gks46" podStartSLOduration=124.056284461 podStartE2EDuration="2m7.332357326s" podCreationTimestamp="2024-07-29 12:10:42 +0000 UTC" firstStartedPulling="2024-07-29 12:10:42.586583913 +0000 UTC m=+381.707256177" lastFinishedPulling="2024-07-29 12:10:45.862656777 +0000 UTC m=+384.983329042" observedRunningTime="2024-07-29 12:10:46.190660973 +0000 UTC m=+385.311333251" watchObservedRunningTime="2024-07-29 12:12:49.332357326 +0000 UTC m=+508.453029606"
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.705359    1277 scope.go:117] "RemoveContainer" containerID="0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886"
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.745411    1277 scope.go:117] "RemoveContainer" containerID="0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886"
	Jul 29 12:12:50 addons-631322 kubelet[1277]: E0729 12:12:50.747006    1277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886\": container with ID starting with 0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886 not found: ID does not exist" containerID="0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886"
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.747133    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886"} err="failed to get container status \"0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886\": rpc error: code = NotFound desc = could not find container \"0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886\": container with ID starting with 0f5644a7a58fd417a4068d9413d6fad0b034d86a575ccf03e0bdef724c26d886 not found: ID does not exist"
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.854103    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/635ee934-5845-4b41-b592-e16cd7ca050a-tmp-dir\") pod \"635ee934-5845-4b41-b592-e16cd7ca050a\" (UID: \"635ee934-5845-4b41-b592-e16cd7ca050a\") "
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.854176    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgqzz\" (UniqueName: \"kubernetes.io/projected/635ee934-5845-4b41-b592-e16cd7ca050a-kube-api-access-qgqzz\") pod \"635ee934-5845-4b41-b592-e16cd7ca050a\" (UID: \"635ee934-5845-4b41-b592-e16cd7ca050a\") "
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.854865    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/635ee934-5845-4b41-b592-e16cd7ca050a-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "635ee934-5845-4b41-b592-e16cd7ca050a" (UID: "635ee934-5845-4b41-b592-e16cd7ca050a"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.862295    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635ee934-5845-4b41-b592-e16cd7ca050a-kube-api-access-qgqzz" (OuterVolumeSpecName: "kube-api-access-qgqzz") pod "635ee934-5845-4b41-b592-e16cd7ca050a" (UID: "635ee934-5845-4b41-b592-e16cd7ca050a"). InnerVolumeSpecName "kube-api-access-qgqzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.954988    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qgqzz\" (UniqueName: \"kubernetes.io/projected/635ee934-5845-4b41-b592-e16cd7ca050a-kube-api-access-qgqzz\") on node \"addons-631322\" DevicePath \"\""
	Jul 29 12:12:50 addons-631322 kubelet[1277]: I0729 12:12:50.955036    1277 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/635ee934-5845-4b41-b592-e16cd7ca050a-tmp-dir\") on node \"addons-631322\" DevicePath \"\""
	
	
	==> storage-provisioner [55022b7395a488f0fd588d0653108db346a81bcae1f44db3b2d05be8712a4bdf] <==
	I0729 12:04:41.477530       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 12:04:41.484382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [cce0435c87268ba17b265e3a13650802a9e3ca598dc724bac205c2de6e3c4d93] <==
	I0729 12:04:43.666278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 12:04:43.737051       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 12:04:43.737113       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 12:04:43.750752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 12:04:43.751078       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-631322_6d727feb-7e6c-4e68-b6d9-3105753fd048!
	I0729 12:04:43.752642       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68d98cb4-4cee-489c-b2b2-baea37fcbb34", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-631322_6d727feb-7e6c-4e68-b6d9-3105753fd048 became leader
	I0729 12:04:43.852129       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-631322_6d727feb-7e6c-4e68-b6d9-3105753fd048!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-631322 -n addons-631322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-631322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (313.63s)
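The post-mortem above ends with the metrics-server pod's volumes being unmounted on addons-631322. A minimal manual check of the addon against the same profile, assuming the cluster is still reachable and that the addon carries the usual k8s-app=metrics-server label (an assumption here, not something shown in these logs):

  kubectl --context addons-631322 -n kube-system get pods -l k8s-app=metrics-server
  kubectl --context addons-631322 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-631322 top nodes

If the APIService never reports Available, `kubectl top` fails the same way the test's polling does.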

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-631322
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-631322: exit status 82 (2m0.463437346s)

                                                
                                                
-- stdout --
	* Stopping node "addons-631322"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-631322" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-631322
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-631322: exit status 11 (21.618962882s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-631322" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-631322
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-631322: exit status 11 (6.147377495s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-631322" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-631322
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-631322: exit status 11 (6.139782753s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-631322" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.37s)
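The root failure here is the GUEST_STOP_TIMEOUT: the kvm2 driver waited out its stop timeout while the guest stayed in state "Running", and the follow-up addon enable/disable commands then failed because SSH to 192.168.39.55:22 had no route to host. A minimal sketch of inspecting and forcing down the stuck guest outside the test, assuming the libvirt domain is named after the profile (the kvm2 driver's usual naming, not confirmed by these logs):

  virsh list --all
  virsh dominfo addons-631322
  virsh destroy addons-631322        # hard power-off of the domain
  minikube logs --file=logs.txt -p addons-631322

The last command is the one the error box above asks for when filing an issue.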

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (684.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-767488 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-767488 -v=7 --alsologtostderr
E0729 12:27:11.724690  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:27:18.312999  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-767488 -v=7 --alsologtostderr: exit status 82 (2m1.876077133s)

                                                
                                                
-- stdout --
	* Stopping node "ha-767488-m04"  ...
	* Stopping node "ha-767488-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:26:27.332433  256727 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:26:27.332559  256727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:26:27.332568  256727 out.go:304] Setting ErrFile to fd 2...
	I0729 12:26:27.332571  256727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:26:27.332760  256727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:26:27.333036  256727 out.go:298] Setting JSON to false
	I0729 12:26:27.333111  256727 mustload.go:65] Loading cluster: ha-767488
	I0729 12:26:27.333471  256727 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:26:27.333562  256727 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:26:27.333732  256727 mustload.go:65] Loading cluster: ha-767488
	I0729 12:26:27.333908  256727 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:26:27.333965  256727 stop.go:39] StopHost: ha-767488-m04
	I0729 12:26:27.334390  256727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:26:27.334435  256727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:26:27.349529  256727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0729 12:26:27.350040  256727 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:26:27.350611  256727 main.go:141] libmachine: Using API Version  1
	I0729 12:26:27.350644  256727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:26:27.350998  256727 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:26:27.353376  256727 out.go:177] * Stopping node "ha-767488-m04"  ...
	I0729 12:26:27.354820  256727 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 12:26:27.354852  256727 main.go:141] libmachine: (ha-767488-m04) Calling .DriverName
	I0729 12:26:27.355072  256727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 12:26:27.355098  256727 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHHostname
	I0729 12:26:27.358069  256727 main.go:141] libmachine: (ha-767488-m04) DBG | domain ha-767488-m04 has defined MAC address 52:54:00:d8:66:33 in network mk-ha-767488
	I0729 12:26:27.358475  256727 main.go:141] libmachine: (ha-767488-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:66:33", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:24:37 +0000 UTC Type:0 Mac:52:54:00:d8:66:33 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:ha-767488-m04 Clientid:01:52:54:00:d8:66:33}
	I0729 12:26:27.358504  256727 main.go:141] libmachine: (ha-767488-m04) DBG | domain ha-767488-m04 has defined IP address 192.168.39.181 and MAC address 52:54:00:d8:66:33 in network mk-ha-767488
	I0729 12:26:27.358697  256727 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHPort
	I0729 12:26:27.358883  256727 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHKeyPath
	I0729 12:26:27.359030  256727 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHUsername
	I0729 12:26:27.359187  256727 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488-m04/id_rsa Username:docker}
	I0729 12:26:27.445661  256727 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 12:26:27.501867  256727 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 12:26:27.556828  256727 main.go:141] libmachine: Stopping "ha-767488-m04"...
	I0729 12:26:27.556869  256727 main.go:141] libmachine: (ha-767488-m04) Calling .GetState
	I0729 12:26:27.558519  256727 main.go:141] libmachine: (ha-767488-m04) Calling .Stop
	I0729 12:26:27.562129  256727 main.go:141] libmachine: (ha-767488-m04) Waiting for machine to stop 0/120
	I0729 12:26:28.745481  256727 main.go:141] libmachine: (ha-767488-m04) Calling .GetState
	I0729 12:26:28.746847  256727 main.go:141] libmachine: Machine "ha-767488-m04" was stopped.
	I0729 12:26:28.746870  256727 stop.go:75] duration metric: took 1.392052024s to stop
	I0729 12:26:28.746893  256727 stop.go:39] StopHost: ha-767488-m03
	I0729 12:26:28.747303  256727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:26:28.747347  256727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:26:28.763096  256727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0729 12:26:28.763520  256727 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:26:28.764013  256727 main.go:141] libmachine: Using API Version  1
	I0729 12:26:28.764039  256727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:26:28.764354  256727 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:26:28.767179  256727 out.go:177] * Stopping node "ha-767488-m03"  ...
	I0729 12:26:28.768394  256727 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 12:26:28.768419  256727 main.go:141] libmachine: (ha-767488-m03) Calling .DriverName
	I0729 12:26:28.768643  256727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 12:26:28.768667  256727 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHHostname
	I0729 12:26:28.771489  256727 main.go:141] libmachine: (ha-767488-m03) DBG | domain ha-767488-m03 has defined MAC address 52:54:00:05:1f:d0 in network mk-ha-767488
	I0729 12:26:28.771903  256727 main.go:141] libmachine: (ha-767488-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:1f:d0", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:23:09 +0000 UTC Type:0 Mac:52:54:00:05:1f:d0 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-767488-m03 Clientid:01:52:54:00:05:1f:d0}
	I0729 12:26:28.771931  256727 main.go:141] libmachine: (ha-767488-m03) DBG | domain ha-767488-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:05:1f:d0 in network mk-ha-767488
	I0729 12:26:28.772075  256727 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHPort
	I0729 12:26:28.772233  256727 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHKeyPath
	I0729 12:26:28.772382  256727 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHUsername
	I0729 12:26:28.772509  256727 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488-m03/id_rsa Username:docker}
	I0729 12:26:28.856986  256727 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 12:26:28.910969  256727 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 12:26:28.965868  256727 main.go:141] libmachine: Stopping "ha-767488-m03"...
	I0729 12:26:28.965898  256727 main.go:141] libmachine: (ha-767488-m03) Calling .GetState
	I0729 12:26:28.967317  256727 main.go:141] libmachine: (ha-767488-m03) Calling .Stop
	I0729 12:26:28.970594  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 0/120
	I0729 12:26:29.972680  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 1/120
	I0729 12:26:30.973977  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 2/120
	I0729 12:26:31.975309  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 3/120
	I0729 12:26:32.976758  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 4/120
	I0729 12:26:33.979004  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 5/120
	I0729 12:26:34.980468  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 6/120
	I0729 12:26:35.981818  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 7/120
	I0729 12:26:36.983450  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 8/120
	I0729 12:26:37.984731  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 9/120
	I0729 12:26:38.986705  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 10/120
	I0729 12:26:39.988030  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 11/120
	I0729 12:26:40.989421  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 12/120
	I0729 12:26:41.991212  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 13/120
	I0729 12:26:42.992624  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 14/120
	I0729 12:26:43.994008  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 15/120
	I0729 12:26:44.995401  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 16/120
	I0729 12:26:45.997074  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 17/120
	I0729 12:26:46.998492  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 18/120
	I0729 12:26:47.999845  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 19/120
	I0729 12:26:49.001302  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 20/120
	I0729 12:26:50.002793  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 21/120
	I0729 12:26:51.004150  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 22/120
	I0729 12:26:52.005477  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 23/120
	I0729 12:26:53.007018  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 24/120
	I0729 12:26:54.008670  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 25/120
	I0729 12:26:55.010316  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 26/120
	I0729 12:26:56.012219  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 27/120
	I0729 12:26:57.013474  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 28/120
	I0729 12:26:58.014738  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 29/120
	I0729 12:26:59.016649  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 30/120
	I0729 12:27:00.018296  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 31/120
	I0729 12:27:01.019920  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 32/120
	I0729 12:27:02.021338  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 33/120
	I0729 12:27:03.022604  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 34/120
	I0729 12:27:04.024753  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 35/120
	I0729 12:27:05.026177  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 36/120
	I0729 12:27:06.028516  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 37/120
	I0729 12:27:07.029927  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 38/120
	I0729 12:27:08.031746  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 39/120
	I0729 12:27:09.033153  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 40/120
	I0729 12:27:10.035359  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 41/120
	I0729 12:27:11.036830  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 42/120
	I0729 12:27:12.038051  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 43/120
	I0729 12:27:13.039366  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 44/120
	I0729 12:27:14.041683  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 45/120
	I0729 12:27:15.043285  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 46/120
	I0729 12:27:16.045070  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 47/120
	I0729 12:27:17.047327  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 48/120
	I0729 12:27:18.048588  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 49/120
	I0729 12:27:19.050053  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 50/120
	I0729 12:27:20.051447  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 51/120
	I0729 12:27:21.052960  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 52/120
	I0729 12:27:22.055230  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 53/120
	I0729 12:27:23.056837  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 54/120
	I0729 12:27:24.058677  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 55/120
	I0729 12:27:25.060073  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 56/120
	I0729 12:27:26.061276  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 57/120
	I0729 12:27:27.062562  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 58/120
	I0729 12:27:28.063950  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 59/120
	I0729 12:27:29.065594  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 60/120
	I0729 12:27:30.066917  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 61/120
	I0729 12:27:31.068175  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 62/120
	I0729 12:27:32.069473  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 63/120
	I0729 12:27:33.070667  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 64/120
	I0729 12:27:34.072347  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 65/120
	I0729 12:27:35.073737  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 66/120
	I0729 12:27:36.074865  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 67/120
	I0729 12:27:37.076214  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 68/120
	I0729 12:27:38.077552  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 69/120
	I0729 12:27:39.079118  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 70/120
	I0729 12:27:40.080318  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 71/120
	I0729 12:27:41.081544  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 72/120
	I0729 12:27:42.082857  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 73/120
	I0729 12:27:43.084167  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 74/120
	I0729 12:27:44.085882  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 75/120
	I0729 12:27:45.087236  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 76/120
	I0729 12:27:46.088876  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 77/120
	I0729 12:27:47.090316  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 78/120
	I0729 12:27:48.091707  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 79/120
	I0729 12:27:49.093124  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 80/120
	I0729 12:27:50.095447  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 81/120
	I0729 12:27:51.096936  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 82/120
	I0729 12:27:52.099295  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 83/120
	I0729 12:27:53.100549  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 84/120
	I0729 12:27:54.102220  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 85/120
	I0729 12:27:55.103578  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 86/120
	I0729 12:27:56.104974  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 87/120
	I0729 12:27:57.107247  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 88/120
	I0729 12:27:58.108535  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 89/120
	I0729 12:27:59.110098  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 90/120
	I0729 12:28:00.111948  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 91/120
	I0729 12:28:01.114155  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 92/120
	I0729 12:28:02.115439  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 93/120
	I0729 12:28:03.117450  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 94/120
	I0729 12:28:04.118749  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 95/120
	I0729 12:28:05.120095  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 96/120
	I0729 12:28:06.121563  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 97/120
	I0729 12:28:07.122942  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 98/120
	I0729 12:28:08.124276  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 99/120
	I0729 12:28:09.125580  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 100/120
	I0729 12:28:10.127084  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 101/120
	I0729 12:28:11.128605  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 102/120
	I0729 12:28:12.130062  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 103/120
	I0729 12:28:13.132096  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 104/120
	I0729 12:28:14.134640  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 105/120
	I0729 12:28:15.135936  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 106/120
	I0729 12:28:16.137465  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 107/120
	I0729 12:28:17.138726  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 108/120
	I0729 12:28:18.140131  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 109/120
	I0729 12:28:19.141614  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 110/120
	I0729 12:28:20.143350  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 111/120
	I0729 12:28:21.145637  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 112/120
	I0729 12:28:22.147419  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 113/120
	I0729 12:28:23.148600  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 114/120
	I0729 12:28:24.150153  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 115/120
	I0729 12:28:25.151781  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 116/120
	I0729 12:28:26.153255  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 117/120
	I0729 12:28:27.154584  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 118/120
	I0729 12:28:28.156767  256727 main.go:141] libmachine: (ha-767488-m03) Waiting for machine to stop 119/120
	I0729 12:28:29.158095  256727 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 12:28:29.158168  256727 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 12:28:29.160031  256727 out.go:177] 
	W0729 12:28:29.161272  256727 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 12:28:29.161288  256727 out.go:239] * 
	* 
	W0729 12:28:29.163788  256727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 12:28:29.165393  256727 out.go:177] 

                                                
                                                
** /stderr **
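For context, the stderr above shows libmachine polling the VM state once per second for 120 attempts (about two minutes, matching the 12:26–12:28 timestamps) before giving up with GUEST_STOP_TIMEOUT. The following is a minimal, self-contained Go sketch of that kind of bounded stop-poll loop; the names stopVM and vmState are illustrative placeholders, not minikube's or libmachine's actual API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState would query the hypervisor for the machine's state; here it is a
	// stand-in that always reports "Running", mirroring the failure in the log above.
	func vmState() string { return "Running" }

	// stopVM polls once per second, up to maxAttempts times, for the machine to
	// leave the "Running" state, and reports a timeout error otherwise.
	func stopVM(maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			if vmState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopVM(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

With a guest that never leaves "Running", this loop exhausts all 120 attempts and returns the timeout error, which is the behavior recorded in the log before the test falls through to the restart attempt below.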
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-767488 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-767488 --wait=true -v=7 --alsologtostderr
E0729 12:29:27.881252  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:29:55.565542  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:32:18.313318  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:33:41.359759  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:34:27.880952  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:37:18.313077  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-767488 --wait=true -v=7 --alsologtostderr: exit status 105 (9m19.931771108s)

                                                
                                                
-- stdout --
	* [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	* Updating the running kvm2 "ha-767488" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-767488-m02" control-plane node in "ha-767488" cluster
	* Updating the running kvm2 "ha-767488-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.217
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.217
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:28:29.213184  257176 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:28:29.213435  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213444  257176 out.go:304] Setting ErrFile to fd 2...
	I0729 12:28:29.213448  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213604  257176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:28:29.214122  257176 out.go:298] Setting JSON to false
	I0729 12:28:29.215063  257176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7852,"bootTime":1722248257,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:28:29.215118  257176 start.go:139] virtualization: kvm guest
	I0729 12:28:29.217142  257176 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:28:29.218351  257176 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:28:29.218358  257176 notify.go:220] Checking for updates...
	I0729 12:28:29.220405  257176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:28:29.221684  257176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:28:29.222900  257176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:28:29.224025  257176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:28:29.225157  257176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:28:29.226709  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:29.226808  257176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:28:29.227211  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.227254  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.242929  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0729 12:28:29.243340  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.243859  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.243878  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.244194  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.244404  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.277920  257176 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:28:29.279142  257176 start.go:297] selected driver: kvm2
	I0729 12:28:29.279164  257176 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.279323  257176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:28:29.279655  257176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.279742  257176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:28:29.294785  257176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:28:29.295450  257176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:28:29.295597  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:28:29.295609  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:28:29.295668  257176 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.295787  257176 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.297555  257176 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:28:29.298735  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:28:29.298761  257176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:28:29.298770  257176 cache.go:56] Caching tarball of preloaded images
	I0729 12:28:29.298837  257176 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:28:29.298847  257176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:28:29.298958  257176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:28:29.299164  257176 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:28:29.299217  257176 start.go:364] duration metric: took 29.143µs to acquireMachinesLock for "ha-767488"
	I0729 12:28:29.299236  257176 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:28:29.299241  257176 fix.go:54] fixHost starting: 
	I0729 12:28:29.299513  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.299545  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.313514  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0729 12:28:29.313936  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.314395  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.314416  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.314828  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.315041  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.315199  257176 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:28:29.316538  257176 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:28:29.316562  257176 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:28:29.318256  257176 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:28:29.319254  257176 machine.go:94] provisionDockerMachine start ...
	I0729 12:28:29.319272  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.319461  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.321717  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322169  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.322198  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322326  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.322496  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322637  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322767  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.322944  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.323131  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.323141  257176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:28:29.438235  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.438263  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438523  257176 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:28:29.438557  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438793  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.441520  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.441975  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.442000  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.442119  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.442319  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442466  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442624  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.442834  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.443017  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.443028  257176 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:28:29.574562  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.574598  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.577319  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577768  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.577796  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577984  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.578163  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578349  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578522  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.578697  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.578860  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.578875  257176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:28:29.694293  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:28:29.694324  257176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:28:29.694371  257176 buildroot.go:174] setting up certificates
	I0729 12:28:29.694382  257176 provision.go:84] configureAuth start
	I0729 12:28:29.694404  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.694702  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:28:29.697510  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.697893  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.697924  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.698075  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.700392  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700707  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.700736  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700956  257176 provision.go:143] copyHostCerts
	I0729 12:28:29.700988  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701018  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:28:29.701026  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701092  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:28:29.701180  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701196  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:28:29.701203  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701232  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:28:29.701337  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701356  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:28:29.701363  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701386  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:28:29.701443  257176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
	I0729 12:28:29.865634  257176 provision.go:177] copyRemoteCerts
	I0729 12:28:29.865706  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:28:29.865737  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.868239  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868633  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.868668  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868894  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.869091  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.869258  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.869404  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:28:29.954969  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:28:29.955070  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 12:28:29.983588  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:28:29.983664  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:28:30.008507  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:28:30.008564  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 12:28:30.033341  257176 provision.go:87] duration metric: took 338.942174ms to configureAuth
	I0729 12:28:30.033370  257176 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:28:30.033650  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:30.033738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:30.036595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037005  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:30.037034  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037194  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:30.037406  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037590  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037757  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:30.037917  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:30.038088  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:30.038102  257176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:30:00.889607  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:30:00.889647  257176 machine.go:97] duration metric: took 1m31.570380134s to provisionDockerMachine
	I0729 12:30:00.889661  257176 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:30:00.889671  257176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:30:00.889688  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:00.890061  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:30:00.890101  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:00.893255  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893756  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:00.893776  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893964  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:00.894195  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:00.894355  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:00.894488  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:00.985670  257176 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:30:00.990118  257176 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:30:00.990148  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:30:00.990216  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:30:00.990282  257176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:30:00.990293  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:30:00.990393  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:30:01.000194  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:01.026191  257176 start.go:296] duration metric: took 136.51077ms for postStartSetup
	I0729 12:30:01.026247  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.026593  257176 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:30:01.026621  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.029199  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029572  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.029595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.029944  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.030081  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.030227  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:30:01.115131  257176 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:30:01.115161  257176 fix.go:56] duration metric: took 1m31.815919439s for fixHost
	I0729 12:30:01.115184  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.117586  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.117880  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.117908  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.118141  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.118375  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118566  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118718  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.118901  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:30:01.119139  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:30:01.119158  257176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 12:30:01.229703  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256201.208888269
	
	I0729 12:30:01.229730  257176 fix.go:216] guest clock: 1722256201.208888269
	I0729 12:30:01.229740  257176 fix.go:229] Guest: 2024-07-29 12:30:01.208888269 +0000 UTC Remote: 2024-07-29 12:30:01.115168505 +0000 UTC m=+91.939593395 (delta=93.719764ms)
	I0729 12:30:01.229788  257176 fix.go:200] guest clock delta is within tolerance: 93.719764ms
	I0729 12:30:01.229811  257176 start.go:83] releasing machines lock for "ha-767488", held for 1m31.930567231s
	I0729 12:30:01.229843  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.230107  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:01.232737  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233111  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.233145  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233363  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.233889  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234111  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234230  257176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:30:01.234695  257176 ssh_runner.go:195] Run: cat /version.json
	I0729 12:30:01.234732  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.234779  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.238055  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238191  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238449  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238476  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238583  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238695  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238714  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238744  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.238859  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238932  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239053  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.239125  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.239217  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239383  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.342923  257176 ssh_runner.go:195] Run: systemctl --version
	I0729 12:30:01.349719  257176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:30:01.510709  257176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:30:01.520723  257176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:30:01.520829  257176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:30:01.530564  257176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:30:01.530598  257176 start.go:495] detecting cgroup driver to use...
	I0729 12:30:01.530671  257176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:30:01.547174  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:30:01.561910  257176 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:30:01.561979  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:30:01.585740  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:30:01.618564  257176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:30:01.783506  257176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:30:01.940620  257176 docker.go:233] disabling docker service ...
	I0729 12:30:01.940698  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:30:01.959815  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:30:01.974713  257176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:30:02.128949  257176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:30:02.297303  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:30:02.311979  257176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:30:02.332382  257176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:30:02.332459  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.344118  257176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:30:02.344185  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.355791  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.367033  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.377875  257176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:30:02.389970  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.401378  257176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.413069  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.423934  257176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:30:02.433485  257176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:30:02.443209  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:02.597078  257176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:30:06.946792  257176 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.349677004s)
	I0729 12:30:06.946822  257176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:30:06.946866  257176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:30:06.951885  257176 start.go:563] Will wait 60s for crictl version
	I0729 12:30:06.951947  257176 ssh_runner.go:195] Run: which crictl
	I0729 12:30:06.955891  257176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:30:06.996933  257176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:30:06.997009  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.029517  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.067863  257176 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:30:07.069386  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:07.072261  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072653  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:07.072677  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072963  257176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:30:07.077985  257176 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:30:07.078159  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:30:07.078210  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.131360  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.131380  257176 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:30:07.131434  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.166976  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.167006  257176 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:30:07.167019  257176 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:30:07.167163  257176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:30:07.167263  257176 ssh_runner.go:195] Run: crio config
	I0729 12:30:07.218394  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:30:07.218416  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:30:07.218425  257176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:30:07.218446  257176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:30:07.218636  257176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:30:07.218660  257176 kube-vip.go:115] generating kube-vip config ...
	I0729 12:30:07.218715  257176 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:30:07.231281  257176 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:30:07.231382  257176 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 12:30:07.231469  257176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:30:07.241143  257176 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:30:07.241203  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:30:07.251296  257176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:30:07.268752  257176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:30:07.286269  257176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:30:07.306290  257176 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:30:07.325270  257176 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:30:07.330227  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:07.480445  257176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:30:07.495284  257176 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:30:07.495312  257176 certs.go:194] generating shared ca certs ...
	I0729 12:30:07.495334  257176 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.495514  257176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:30:07.495585  257176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:30:07.495600  257176 certs.go:256] generating profile certs ...
	I0729 12:30:07.495692  257176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:30:07.495719  257176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:30:07.495734  257176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.45 192.168.39.210 192.168.39.254]
	I0729 12:30:07.554302  257176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 ...
	I0729 12:30:07.554335  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293: {Name:mkc55706e98723442a7209c78a851c6aeec63640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554502  257176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 ...
	I0729 12:30:07.554512  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293: {Name:mkd6b648aa8c639f0f8174c6258aa3c28a419e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554579  257176 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt
	I0729 12:30:07.554733  257176 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key
	I0729 12:30:07.554863  257176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:30:07.554878  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:30:07.554890  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:30:07.554905  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:30:07.554917  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:30:07.554930  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:30:07.554942  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:30:07.554954  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:30:07.554966  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:30:07.555012  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:30:07.555038  257176 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:30:07.555053  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:30:07.555074  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:30:07.555094  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:30:07.555113  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:30:07.555149  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:07.555175  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:30:07.555188  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:07.555200  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:30:07.555742  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:30:07.581960  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:30:07.606534  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:30:07.651322  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:30:07.734079  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:30:07.843422  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:30:07.919383  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:30:08.009302  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:30:08.114819  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:30:08.177084  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:30:08.323565  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:30:08.418339  257176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:30:08.452890  257176 ssh_runner.go:195] Run: openssl version
	I0729 12:30:08.463083  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:30:08.481125  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488340  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488407  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.496532  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:30:08.512456  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:30:08.528227  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.535939  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.536020  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.542124  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:30:08.556827  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:30:08.570963  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578024  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578072  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.583957  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:30:08.599010  257176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:30:08.609458  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:30:08.622965  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:30:08.645142  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:30:08.661889  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:30:08.733013  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:30:08.752828  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 12:30:08.763265  257176 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:30:08.763447  257176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:30:08.763516  257176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:30:08.826291  257176 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:30:08.826316  257176 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:30:08.826319  257176 cri.go:89] found id: "f39e050cd5cc4b05a81e93b2261e728d2c07bc7c1daa3162edfde11e82a4620c"
	I0729 12:30:08.826323  257176 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:30:08.826325  257176 cri.go:89] found id: "a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27"
	I0729 12:30:08.826329  257176 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:30:08.826331  257176 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:30:08.826334  257176 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:30:08.826336  257176 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:30:08.826341  257176 cri.go:89] found id: "14bf682e420cb00f83e39a018ac3723f16ed71fccee45180d30073e87b224475"
	I0729 12:30:08.826343  257176 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:30:08.826345  257176 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:30:08.826348  257176 cri.go:89] found id: "d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91"
	I0729 12:30:08.826351  257176 cri.go:89] found id: "70136b17c65dd39a4d8ff8ecf6e4c4229432e46ce9fcbae7271cb05229ee641d"
	I0729 12:30:08.826356  257176 cri.go:89] found id: ""
	I0729 12:30:08.826397  257176 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-767488 -v=7 --alsologtostderr" : exit status 105
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-767488
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.662767672s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m03_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:28:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:28:29.213184  257176 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:28:29.213435  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213444  257176 out.go:304] Setting ErrFile to fd 2...
	I0729 12:28:29.213448  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213604  257176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:28:29.214122  257176 out.go:298] Setting JSON to false
	I0729 12:28:29.215063  257176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7852,"bootTime":1722248257,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:28:29.215118  257176 start.go:139] virtualization: kvm guest
	I0729 12:28:29.217142  257176 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:28:29.218351  257176 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:28:29.218358  257176 notify.go:220] Checking for updates...
	I0729 12:28:29.220405  257176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:28:29.221684  257176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:28:29.222900  257176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:28:29.224025  257176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:28:29.225157  257176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:28:29.226709  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:29.226808  257176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:28:29.227211  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.227254  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.242929  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0729 12:28:29.243340  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.243859  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.243878  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.244194  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.244404  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.277920  257176 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:28:29.279142  257176 start.go:297] selected driver: kvm2
	I0729 12:28:29.279164  257176 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.279323  257176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:28:29.279655  257176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.279742  257176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:28:29.294785  257176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:28:29.295450  257176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:28:29.295597  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:28:29.295609  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:28:29.295668  257176 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.295787  257176 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.297555  257176 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:28:29.298735  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:28:29.298761  257176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:28:29.298770  257176 cache.go:56] Caching tarball of preloaded images
	I0729 12:28:29.298837  257176 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:28:29.298847  257176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:28:29.298958  257176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:28:29.299164  257176 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:28:29.299217  257176 start.go:364] duration metric: took 29.143µs to acquireMachinesLock for "ha-767488"
	I0729 12:28:29.299236  257176 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:28:29.299241  257176 fix.go:54] fixHost starting: 
	I0729 12:28:29.299513  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.299545  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.313514  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0729 12:28:29.313936  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.314395  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.314416  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.314828  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.315041  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.315199  257176 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:28:29.316538  257176 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:28:29.316562  257176 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:28:29.318256  257176 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:28:29.319254  257176 machine.go:94] provisionDockerMachine start ...
	I0729 12:28:29.319272  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.319461  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.321717  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322169  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.322198  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322326  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.322496  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322637  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322767  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.322944  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.323131  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.323141  257176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:28:29.438235  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.438263  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438523  257176 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:28:29.438557  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438793  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.441520  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.441975  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.442000  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.442119  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.442319  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442466  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442624  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.442834  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.443017  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.443028  257176 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:28:29.574562  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.574598  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.577319  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577768  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.577796  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577984  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.578163  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578349  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578522  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.578697  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.578860  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.578875  257176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:28:29.694293  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:28:29.694324  257176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:28:29.694371  257176 buildroot.go:174] setting up certificates
	I0729 12:28:29.694382  257176 provision.go:84] configureAuth start
	I0729 12:28:29.694404  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.694702  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:28:29.697510  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.697893  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.697924  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.698075  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.700392  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700707  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.700736  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700956  257176 provision.go:143] copyHostCerts
	I0729 12:28:29.700988  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701018  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:28:29.701026  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701092  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:28:29.701180  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701196  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:28:29.701203  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701232  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:28:29.701337  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701356  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:28:29.701363  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701386  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:28:29.701443  257176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
	I0729 12:28:29.865634  257176 provision.go:177] copyRemoteCerts
	I0729 12:28:29.865706  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:28:29.865737  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.868239  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868633  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.868668  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868894  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.869091  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.869258  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.869404  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:28:29.954969  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:28:29.955070  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 12:28:29.983588  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:28:29.983664  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:28:30.008507  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:28:30.008564  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 12:28:30.033341  257176 provision.go:87] duration metric: took 338.942174ms to configureAuth
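
configureAuth regenerates the Docker-style server certificate with the SANs listed above (127.0.0.1, 192.168.39.217, ha-767488, localhost, minikube). A rough Go sketch of producing such a certificate with crypto/x509 follows; it self-signs for brevity where the real flow signs with the CA key from ca-key.pem, and the 26280h lifetime simply mirrors the CertExpiration value that appears later in the cluster config.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-767488"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the "generating server cert ... san=[...]" line above.
			DNSNames:    []string{"ha-767488", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")},
		}
		// Self-signed here for brevity; the real provisioner signs with the cluster CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
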
	I0729 12:28:30.033370  257176 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:28:30.033650  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:30.033738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:30.036595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037005  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:30.037034  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037194  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:30.037406  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037590  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037757  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:30.037917  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:30.038088  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:30.038102  257176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:30:00.889607  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:30:00.889647  257176 machine.go:97] duration metric: took 1m31.570380134s to provisionDockerMachine
	I0729 12:30:00.889661  257176 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:30:00.889671  257176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:30:00.889688  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:00.890061  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:30:00.890101  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:00.893255  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893756  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:00.893776  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893964  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:00.894195  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:00.894355  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:00.894488  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:00.985670  257176 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:30:00.990118  257176 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:30:00.990148  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:30:00.990216  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:30:00.990282  257176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:30:00.990293  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:30:00.990393  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:30:01.000194  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:01.026191  257176 start.go:296] duration metric: took 136.51077ms for postStartSetup
	I0729 12:30:01.026247  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.026593  257176 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:30:01.026621  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.029199  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029572  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.029595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.029944  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.030081  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.030227  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:30:01.115131  257176 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:30:01.115161  257176 fix.go:56] duration metric: took 1m31.815919439s for fixHost
	I0729 12:30:01.115184  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.117586  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.117880  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.117908  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.118141  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.118375  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118566  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118718  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.118901  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:30:01.119139  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:30:01.119158  257176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:30:01.229703  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256201.208888269
	
	I0729 12:30:01.229730  257176 fix.go:216] guest clock: 1722256201.208888269
	I0729 12:30:01.229740  257176 fix.go:229] Guest: 2024-07-29 12:30:01.208888269 +0000 UTC Remote: 2024-07-29 12:30:01.115168505 +0000 UTC m=+91.939593395 (delta=93.719764ms)
	I0729 12:30:01.229788  257176 fix.go:200] guest clock delta is within tolerance: 93.719764ms
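
The guest-clock check reads `date +%s.%N` on the guest and compares it against the host clock; the 93.719764ms delta above is within tolerance, so no clock adjustment is made. A small Go sketch of that comparison, reusing the exact values from this log and assuming a one-second tolerance for illustration (the real threshold lives in fix.go):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// guestDelta parses the guest's `date +%s.%N` output and returns its offset
	// from the supplied local timestamp.
	func guestDelta(guestOut string, local time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(local), nil
	}

	func main() {
		// Guest and remote timestamps copied from the log lines above.
		d, err := guestDelta("1722256201.208888269", time.Unix(1722256201, 115168505))
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed tolerance for this sketch
		fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance,
			math.Abs(float64(d)) <= float64(tolerance))
	}
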
	I0729 12:30:01.229811  257176 start.go:83] releasing machines lock for "ha-767488", held for 1m31.930567231s
	I0729 12:30:01.229843  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.230107  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:01.232737  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233111  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.233145  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233363  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.233889  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234111  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234230  257176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:30:01.234695  257176 ssh_runner.go:195] Run: cat /version.json
	I0729 12:30:01.234732  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.234779  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.238055  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238191  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238449  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238476  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238583  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238695  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238714  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238744  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.238859  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238932  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239053  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.239125  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.239217  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239383  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.342923  257176 ssh_runner.go:195] Run: systemctl --version
	I0729 12:30:01.349719  257176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:30:01.510709  257176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:30:01.520723  257176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:30:01.520829  257176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:30:01.530564  257176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:30:01.530598  257176 start.go:495] detecting cgroup driver to use...
	I0729 12:30:01.530671  257176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:30:01.547174  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:30:01.561910  257176 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:30:01.561979  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:30:01.585740  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:30:01.618564  257176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:30:01.783506  257176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:30:01.940620  257176 docker.go:233] disabling docker service ...
	I0729 12:30:01.940698  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:30:01.959815  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:30:01.974713  257176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:30:02.128949  257176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:30:02.297303  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:30:02.311979  257176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:30:02.332382  257176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:30:02.332459  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.344118  257176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:30:02.344185  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.355791  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.367033  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.377875  257176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:30:02.389970  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.401378  257176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.413069  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
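
Each of the commands above is a sed edit of /etc/crio/crio.conf.d/02-crio.conf. The cgroup_manager rewrite, for example, can be expressed in Go with a multiline regexp over an in-memory copy of the file; the sample input below is hypothetical and only meant to show the substitution.

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCgroupManager mirrors the `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'`
	// edit shown in the log, applied to an in-memory copy of 02-crio.conf.
	func setCgroupManager(conf string) string {
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		return re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	}

	func main() {
		sample := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(setCgroupManager(sample))
	}
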
	I0729 12:30:02.423934  257176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:30:02.433485  257176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:30:02.443209  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:02.597078  257176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:30:06.946792  257176 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.349677004s)
	I0729 12:30:06.946822  257176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:30:06.946866  257176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:30:06.951885  257176 start.go:563] Will wait 60s for crictl version
	I0729 12:30:06.951947  257176 ssh_runner.go:195] Run: which crictl
	I0729 12:30:06.955891  257176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:30:06.996933  257176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:30:06.997009  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.029517  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.067863  257176 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:30:07.069386  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:07.072261  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072653  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:07.072677  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072963  257176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:30:07.077985  257176 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:30:07.078159  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:30:07.078210  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.131360  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.131380  257176 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:30:07.131434  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.166976  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.167006  257176 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:30:07.167019  257176 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:30:07.167163  257176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:30:07.167263  257176 ssh_runner.go:195] Run: crio config
	I0729 12:30:07.218394  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:30:07.218416  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:30:07.218425  257176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:30:07.218446  257176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:30:07.218636  257176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
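
The kubeadm config printed above is rendered from the kubeadm options struct logged earlier (node IP, cluster name, Kubernetes version, CRI socket, and so on). A toy text/template rendering of just the InitConfiguration stanza is sketched below; the template and struct names are made up for this example, and the real template in minikube's bootstrapper covers the full document.

	package main

	import (
		"os"
		"text/template"
	)

	// A minimal template covering only the InitConfiguration stanza seen above.
	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("init").Parse(initTmpl))
		t.Execute(os.Stdout, struct {
			NodeIP        string
			NodeName      string
			APIServerPort int
		}{"192.168.39.217", "ha-767488", 8443})
	}
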
	
	I0729 12:30:07.218660  257176 kube-vip.go:115] generating kube-vip config ...
	I0729 12:30:07.218715  257176 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:30:07.231281  257176 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:30:07.231382  257176 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 12:30:07.231469  257176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:30:07.241143  257176 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:30:07.241203  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:30:07.251296  257176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:30:07.268752  257176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:30:07.286269  257176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:30:07.306290  257176 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:30:07.325270  257176 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:30:07.330227  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:07.480445  257176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:30:07.495284  257176 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:30:07.495312  257176 certs.go:194] generating shared ca certs ...
	I0729 12:30:07.495334  257176 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.495514  257176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:30:07.495585  257176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:30:07.495600  257176 certs.go:256] generating profile certs ...
	I0729 12:30:07.495692  257176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:30:07.495719  257176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:30:07.495734  257176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.45 192.168.39.210 192.168.39.254]
	I0729 12:30:07.554302  257176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 ...
	I0729 12:30:07.554335  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293: {Name:mkc55706e98723442a7209c78a851c6aeec63640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554502  257176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 ...
	I0729 12:30:07.554512  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293: {Name:mkd6b648aa8c639f0f8174c6258aa3c28a419e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554579  257176 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt
	I0729 12:30:07.554733  257176 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key
	I0729 12:30:07.554863  257176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:30:07.554878  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:30:07.554890  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:30:07.554905  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:30:07.554917  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:30:07.554930  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:30:07.554942  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:30:07.554954  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:30:07.554966  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:30:07.555012  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:30:07.555038  257176 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:30:07.555053  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:30:07.555074  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:30:07.555094  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:30:07.555113  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:30:07.555149  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:07.555175  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:30:07.555188  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:07.555200  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:30:07.555742  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:30:07.581960  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:30:07.606534  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:30:07.651322  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:30:07.734079  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:30:07.843422  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:30:07.919383  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:30:08.009302  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:30:08.114819  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:30:08.177084  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:30:08.323565  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:30:08.418339  257176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:30:08.452890  257176 ssh_runner.go:195] Run: openssl version
	I0729 12:30:08.463083  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:30:08.481125  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488340  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488407  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.496532  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:30:08.512456  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:30:08.528227  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.535939  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.536020  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.542124  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:30:08.556827  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:30:08.570963  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578024  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578072  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.583957  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:30:08.599010  257176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:30:08.609458  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:30:08.622965  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:30:08.645142  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:30:08.661889  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:30:08.733013  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:30:08.752828  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
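
The `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours. An equivalent check in Go parses the PEM file and compares NotAfter against now+24h; the path used in main is one of the certs from the log and is only an example.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// checkend reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", expiring)
	}
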
	I0729 12:30:08.763265  257176 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:30:08.763447  257176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:30:08.763516  257176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:30:08.826291  257176 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:30:08.826316  257176 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:30:08.826319  257176 cri.go:89] found id: "f39e050cd5cc4b05a81e93b2261e728d2c07bc7c1daa3162edfde11e82a4620c"
	I0729 12:30:08.826323  257176 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:30:08.826325  257176 cri.go:89] found id: "a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27"
	I0729 12:30:08.826329  257176 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:30:08.826331  257176 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:30:08.826334  257176 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:30:08.826336  257176 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:30:08.826341  257176 cri.go:89] found id: "14bf682e420cb00f83e39a018ac3723f16ed71fccee45180d30073e87b224475"
	I0729 12:30:08.826343  257176 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:30:08.826345  257176 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:30:08.826348  257176 cri.go:89] found id: "d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91"
	I0729 12:30:08.826351  257176 cri.go:89] found id: "70136b17c65dd39a4d8ff8ecf6e4c4229432e46ce9fcbae7271cb05229ee641d"
	I0729 12:30:08.826356  257176 cri.go:89] found id: ""
	I0729 12:30:08.826397  257176 ssh_runner.go:195] Run: sudo runc list -f json
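
The container listing above comes from `crictl ps` with a namespace label filter, exactly as shown in the Run: line; the resulting IDs are then checked against `runc list`. A trivial Go wrapper that reproduces the same listing (assuming crictl is on PATH and sudo is available) might look like this:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation the log records under "crictl ps -a --quiet --label ...".
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
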
	
	
	==> CRI-O <==
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.770617815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256669770597003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ade1f102-f296-4722-ba42-880002b1db1d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.771098032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f9fc48-d83d-40ce-b6ec-4de5f3f3bacc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.771171792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f9fc48-d83d-40ce-b6ec-4de5f3f3bacc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.771607268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f9fc48-d83d-40ce-b6ec-4de5f3f3bacc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.812860658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5dd905f-9864-4b96-a125-e9af266c908c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.812994998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5dd905f-9864-4b96-a125-e9af266c908c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.814736020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0811699a-5523-46db-a279-2e27d6a97f1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.815309971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256669815286402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0811699a-5523-46db-a279-2e27d6a97f1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.815687076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfe72692-919c-4826-9c00-1b6e0bea6896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.815740618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfe72692-919c-4826-9c00-1b6e0bea6896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.816239147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfe72692-919c-4826-9c00-1b6e0bea6896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.858000909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61b7a942-d9e5-4dcb-acf5-20fe04a54cf9 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.858110745Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61b7a942-d9e5-4dcb-acf5-20fe04a54cf9 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.859344868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4636fa30-7856-4e20-aa15-87dfdc127f48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.859916540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256669859760240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4636fa30-7856-4e20-aa15-87dfdc127f48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.860607709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7d92ae0-f234-4744-b9b5-dfa15a264cff name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.860683010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7d92ae0-f234-4744-b9b5-dfa15a264cff name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.861164892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7d92ae0-f234-4744-b9b5-dfa15a264cff name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.903108249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb80ce66-2ef2-487f-82cf-37498ff5e0f5 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.903196216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb80ce66-2ef2-487f-82cf-37498ff5e0f5 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.903995870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b74f74a4-9aee-40c5-9f6b-4d92f54afa26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.904417976Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256669904397333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b74f74a4-9aee-40c5-9f6b-4d92f54afa26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.904919408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8d389d5-5428-4d63-a22f-380491491849 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.904992939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8d389d5-5428-4d63-a22f-380491491849 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:49 ha-767488 crio[3370]: time="2024-07-29 12:37:49.905460049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8d389d5-5428-4d63-a22f-380491491849 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66eeaa3de5dde       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Running             kube-controller-manager   4                   77ae4bca5cb19       kube-controller-manager-ha-767488
	149dfcffe55a7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Exited              kube-controller-manager   3                   77ae4bca5cb19       kube-controller-manager-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      6 minutes ago       Running             busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      7 minutes ago       Running             busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	7ffae0e726786       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      7 minutes ago       Running             kube-vip                  0                   4ac1d50b066bb       kube-vip-ha-767488
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	76b855b3ad75b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       1                   3b6ba7ca06eb5       storage-provisioner
	547d6699a30a2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            1                   874397bc99826       kube-apiserver-ha-767488
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      7 minutes ago       Running             kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      1                   c38a2d43be153       etcd-ha-767488
	79b136a6e0ea0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   93f5e8a8985f2       busybox-fc5497c4f-trgfp
	3f7b67549d5f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   841baabfcb1b9       busybox-fc5497c4f-4ppv4
	a26ce9fba519a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Exited              storage-provisioner       0                   fa4b77fe094c4       storage-provisioner
	c263b16acab21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   c2f0a3db73b36       coredns-7db6d8ff4d-k6r5l
	ed92faf8d1c93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   721397f12db8c       coredns-7db6d8ff4d-qqt5t
	e2114078a73c1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   a4aeb6b1329f7       kindnet-6x56p
	a99c50ffbfb28       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   7ba65a0686e20       kube-proxy-sqk96
	f1ea8fbc1b3ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   f96272e7bee5b       kube-scheduler-ha-767488
	dab08a0e0f3c1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   489cc61ac2d59       etcd-ha-767488
	d427719357ecf       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      16 minutes ago      Exited              kube-apiserver            0                   d3dacdbbe9ee4       kube-apiserver-ha-767488
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42790 - 48270 "HINFO IN 5378893488737017947.5532814832189282968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010592186s
	
	
	==> coredns [c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d] <==
	[INFO] 10.244.0.5:56893 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.006105477s
	[INFO] 10.244.2.2:49553 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000142409s
	[INFO] 10.244.0.4:44644 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190983s
	[INFO] 10.244.0.4:50509 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000060342s
	[INFO] 10.244.0.4:50667 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000235509s
	[INFO] 10.244.0.4:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002136398s
	[INFO] 10.244.0.5:59842 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008887594s
	[INFO] 10.244.0.5:50358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114137s
	[INFO] 10.244.2.2:51452 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000291176s
	[INFO] 10.244.2.2:40431 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000264874s
	[INFO] 10.244.2.2:35432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167749s
	[INFO] 10.244.0.4:53618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068812s
	[INFO] 10.244.0.4:52172 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001711814s
	[INFO] 10.244.0.4:47059 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131595s
	[INFO] 10.244.0.4:39902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147006s
	[INFO] 10.244.0.4:37624 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173908s
	[INFO] 10.244.0.5:52999 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135418s
	[INFO] 10.244.2.2:39192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096392s
	[INFO] 10.244.2.2:47682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102837s
	[INFO] 10.244.0.4:43135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079564s
	[INFO] 10.244.0.4:54022 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000229955s
	[INFO] 10.244.0.4:49468 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000035685s
	[INFO] 10.244.0.4:56523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000031196s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36411 - 11278 "HINFO IN 1809215905934978785.4639219165358094612. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008507012s
	
	
	==> coredns [ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0] <==
	[INFO] 10.244.2.2:52575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121686s
	[INFO] 10.244.2.2:60306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096516s
	[INFO] 10.244.2.2:56750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201569s
	[INFO] 10.244.0.4:53864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076226s
	[INFO] 10.244.0.4:43895 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007603s
	[INFO] 10.244.0.4:49768 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001191618s
	[INFO] 10.244.0.4:36610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091068s
	[INFO] 10.244.0.5:36533 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152157s
	[INFO] 10.244.0.5:59316 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006399s
	[INFO] 10.244.0.5:59406 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051375s
	[INFO] 10.244.0.5:56054 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054289s
	[INFO] 10.244.2.2:32902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000349565s
	[INFO] 10.244.2.2:56936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214735s
	[INFO] 10.244.2.2:38037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076517s
	[INFO] 10.244.2.2:33788 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066283s
	[INFO] 10.244.0.4:46469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080696s
	[INFO] 10.244.0.4:56376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069276s
	[INFO] 10.244.0.4:41139 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003161s
	[INFO] 10.244.0.5:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123194s
	[INFO] 10.244.0.5:43997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184384s
	[INFO] 10.244.0.5:34612 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094985s
	[INFO] 10.244.2.2:57694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131654s
	[INFO] 10.244.2.2:52834 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009944s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-767488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:37:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-767488
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4910accb98434efca56ff8b39068800c
	  System UUID:                4910accb-9843-4efc-a56f-f8b39068800c
	  Boot ID:                    f538ab8c-89b7-40ce-b82e-7644a867ee15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4ppv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     busybox-fc5497c4f-trgfp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-k6r5l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-qqt5t             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-767488                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-6x56p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-767488             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-767488    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-sqk96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-767488             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-767488                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m48s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-767488 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Warning  ContainerGCFailed        7m44s (x2 over 8m44s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m27s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	
	
	Name:               ha-767488-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:22:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:37:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-767488-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9a3fe2d6456464f8574d4c1d95e4f21
	  System UUID:                d9a3fe2d-6456-464f-8574-d4c1d95e4f21
	  Boot ID:                    9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jjx77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 etcd-ha-767488-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-l7jpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-767488-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-767488-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-d9lg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-767488-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-767488-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m54s              kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 11m                kubelet          Node ha-767488-m02 has been rebooted, boot id: 9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Normal   RegisteredNode           11m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  ContainerGCFailed        6m55s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m27s              node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	
	
	Name:               ha-767488-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:26:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-767488-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ca168b2de41451a82ff59b787c535ad
	  System UUID:                5ca168b2-de41-451a-82ff-59b787c535ad
	  Boot ID:                    8622bb8b-ae59-41f0-afa6-05666f1768af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q6fnx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-767488-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-bz9pp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-767488-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-767488-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tzj27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-767488-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-767488-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  RegisteredNode           14m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  NodeNotReady             10m                node-controller  Node ha-767488-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           5m27s              node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	
	
	Name:               ha-767488-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:26:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-767488-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 326a5fb51aae42b7b8056fc3c9e53faf
	  System UUID:                326a5fb5-1aae-42b7-b805-6fc3c9e53faf
	  Boot ID:                    a14ff095-d004-41b8-991d-b6ed30b10920
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bgb2n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-2m5gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  NodeNotReady             10m                node-controller  Node ha-767488-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           5m27s              node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	
	
	==> dmesg <==
	[  +0.059198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062192] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.161618] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.141343] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.277698] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.104100] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.675894] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060211] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 12:21] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.541880] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"warn","ts":"2024-07-29T12:37:50.283743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.296417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.301333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.319372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.320612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.334285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.343859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.348692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.351953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.361419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.370026Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.378567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.382568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.384016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.386147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.389178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.396252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.40408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.412415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.415334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.4183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.418401Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.423762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.431927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:50.439013Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> etcd [dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a] <==
	{"level":"info","ts":"2024-07-29T12:28:30.36201Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"a09c9983ac28f1fd","old-leader-member-id":"a09c9983ac28f1fd","new-leader-member-id":"30f76e47e42605a5","took":"101.152061ms"}
	{"level":"info","ts":"2024-07-29T12:28:30.362301Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362459Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362504Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362589Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362622Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362871Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363054Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363107Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"30f76e47e42605a5","error":"failed to read 30f76e47e42605a5 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T12:28:30.363176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363422Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T12:28:30.363566Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363616Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363657Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.3637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.363751Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364622Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364716Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364883Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.370716Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"warn","ts":"2024-07-29T12:28:30.370988Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55490","server-name":"","error":"read tcp 192.168.39.217:2380->192.168.39.45:55490: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:28:30.371604Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55480","server-name":"","error":"set tcp 192.168.39.217:2380: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:28:31.371639Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:28:31.37168Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> kernel <==
	 12:37:50 up 17 min,  0 users,  load average: 0.31, 0.35, 0.27
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:37:19.360506       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:29.352129       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:29.352281       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:29.352464       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:29.352509       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:29.352585       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:29.352605       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:29.352682       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:29.352702       1 main.go:299] handling current node
	I0729 12:37:39.352575       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:39.352764       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:39.352991       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:39.353023       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:39.353137       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:39.353160       1 main.go:299] handling current node
	I0729 12:37:39.353205       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:39.353234       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:49.361413       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:49.361470       1 main.go:299] handling current node
	I0729 12:37:49.361484       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:49.361489       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:49.361636       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:49.361763       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:49.361901       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:49.361925       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316] <==
	I0729 12:27:52.596723       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.599761       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:02.599920       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.600088       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:02.600113       1 main.go:299] handling current node
	I0729 12:28:02.600135       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:02.600152       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:02.600239       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:02.600258       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602416       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:12.602457       1 main.go:299] handling current node
	I0729 12:28:12.602474       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:12.602503       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:12.602642       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:12.602666       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602727       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:12.602746       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:22.595743       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:22.595784       1 main.go:299] handling current node
	I0729 12:28:22.595836       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:22.595843       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:22.596051       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:22.596107       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:22.596246       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:22.596285       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3] <==
	Trace[2068681805]: ---"Objects listed" error:etcdserver: request timed out 13010ms (12:31:52.763)
	Trace[2068681805]: [13.010453158s] [13.010453158s] END
	E0729 12:31:52.763880       1 cacher.go:475] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0729 12:31:52.763899       1 reflector.go:547] storage/cacher.go:/leases: failed to list *coordination.Lease: etcdserver: request timed out
	I0729 12:31:52.763927       1 trace.go:236] Trace[967270758]: "Reflector ListAndWatch" name:storage/cacher.go:/leases (29-Jul-2024 12:31:39.759) (total time: 13004ms):
	Trace[967270758]: ---"Objects listed" error:etcdserver: request timed out 13004ms (12:31:52.763)
	Trace[967270758]: [13.004565083s] [13.004565083s] END
	E0729 12:31:52.763936       1 cacher.go:475] cacher (leases.coordination.k8s.io): unexpected ListAndWatch error: failed to list *coordination.Lease: etcdserver: request timed out; reinitializing...
	W0729 12:31:52.763958       1 reflector.go:547] storage/cacher.go:/csidrivers: failed to list *storage.CSIDriver: etcdserver: request timed out
	I0729 12:31:52.763994       1 trace.go:236] Trace[710917587]: "Reflector ListAndWatch" name:storage/cacher.go:/csidrivers (29-Jul-2024 12:31:39.753) (total time: 13010ms):
	Trace[710917587]: ---"Objects listed" error:etcdserver: request timed out 13010ms (12:31:52.763)
	Trace[710917587]: [13.010204143s] [13.010204143s] END
	E0729 12:31:52.764016       1 cacher.go:475] cacher (csidrivers.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.CSIDriver: etcdserver: request timed out; reinitializing...
	E0729 12:31:53.729170       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	I0729 12:31:53.729368       1 trace.go:236] Trace[902411417]: "Get" accept:application/json, */*,audit-id:fa59e21f-4667-469a-816a-73d1af07e054,client:192.168.39.1,api-group:,api-version:v1,name:ha-767488-m02,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-767488-m02,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Jul-2024 12:31:46.608) (total time: 7120ms):
	Trace[902411417]: [7.120424258s] [7.120424258s] END
	E0729 12:31:53.729518       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	E0729 12:31:53.729587       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	I0729 12:31:53.731004       1 trace.go:236] Trace[967601293]: "Get" accept:application/json, */*,audit-id:e6e55dfb-efc1-46e2-8f8e-bb982027ae68,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Jul-2024 12:31:47.943) (total time: 5787ms):
	Trace[967601293]: [5.787057446s] [5.787057446s] END
	I0729 12:31:53.731377       1 trace.go:236] Trace[1545811589]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:392ea8d0-13cd-4c24-b7ae-e13a5045beef,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:system:controller:persistent-volume-binder,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:resource,url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:GET (29-Jul-2024 12:31:45.753) (total time: 7977ms):
	Trace[1545811589]: [7.977997758s] [7.977997758s] END
	E0729 12:31:53.731759       1 storage_rbac.go:232] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder: etcdserver: leader changed
	W0729 12:31:54.518380       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.45]
	W0729 12:32:14.519919       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	
	
	==> kube-apiserver [d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91] <==
	I0729 12:28:30.187767       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.188600       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:28:30.189914       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:28:30.189977       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0729 12:28:30.190012       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0729 12:28:30.192079       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192120       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192128       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192141       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.194141       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 12:28:30.194189       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 12:28:30.198902       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 12:28:30.204074       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 12:28:30.205442       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 12:28:30.213471       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.213575       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.213659       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214057       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214205       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214277       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214426       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214629       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214728       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214894       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214965       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f] <==
	I0729 12:31:21.148099       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:31:21.413326       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:31:21.413367       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:31:21.414936       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:31:21.415020       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:31:21.415139       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:31:21.415338       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 12:31:31.426553       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b] <==
	I0729 12:32:23.322258       1 shared_informer.go:320] Caches are synced for namespace
	I0729 12:32:23.375869       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:32:23.381302       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:32:23.385073       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:32:23.407529       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:32:23.421636       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 12:32:23.439354       1 shared_informer.go:320] Caches are synced for taint
	I0729 12:32:23.439677       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 12:32:23.484698       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488"
	I0729 12:32:23.484766       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m03"
	I0729 12:32:23.484846       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m04"
	I0729 12:32:23.484966       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m02"
	I0729 12:32:23.485114       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:32:23.525676       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 12:32:23.930267       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:32:23.955995       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:32:23.956031       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:37:23.555782       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-q6fnx"
	I0729 12:37:23.577937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.269µs"
	I0729 12:37:23.632551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.059928ms"
	I0729 12:37:23.644732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.12292ms"
	I0729 12:37:23.645012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.899µs"
	I0729 12:37:23.654010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.855µs"
	I0729 12:37:28.528253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.910657ms"
	I0729 12:37:28.528967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.66µs"
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	W0729 12:31:17.310464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311579       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311601       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:17.311772       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:17.311782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:26.526651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:26.526903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:29.599349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:29.599511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599615       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 12:31:41.887061       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:41.886328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:41.887293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:54.176621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.176873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.177234       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:54.179432       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.179489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:32:20.696876       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:32:29.397880       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:32:37.296532       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1] <==
	I0729 12:21:17.255773       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:21:17.286934       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0729 12:21:17.337677       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:21:17.337727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:21:17.337746       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:21:17.340517       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:21:17.340710       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:21:17.340741       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:21:17.342294       1 config.go:192] "Starting service config controller"
	I0729 12:21:17.342534       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:21:17.342581       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:21:17.342586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:21:17.343590       1 config.go:319] "Starting node config controller"
	I0729 12:21:17.343624       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:21:17.443485       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:21:17.443697       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:21:17.443586       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	W0729 12:30:12.498631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:30:12.498658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:30:12.498717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:30:12.498744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 12:30:12.498839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:30:12.498867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:30:12.498916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:30:12.498942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:30:12.499009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:30:12.499036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:30:12.499077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:30:12.499101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:30:12.499133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 12:30:12.499159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 12:30:12.711214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:30:12.711269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:30:12.901782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:30:12.901876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:30:13.940936       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:30:13.940995       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:30:14.206492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 12:30:14.206526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:30:14.291769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:30:14.291892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0729 12:30:18.034771       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb] <==
	E0729 12:21:00.957258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:00.966559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:21:00.966602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:21:00.969971       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:21:00.970006       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:21:00.975481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:21:00.975514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:21:00.991207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:21:00.991302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:21:01.043730       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:21:01.043771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:21:01.201334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.201433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.269111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:21:01.269202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:21:01.308519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:21:01.308567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:21:01.484192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.484242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.488207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:21:01.488410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 12:21:03.597444       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:24:50.794520       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bgb2n" node="ha-767488-m04"
	E0729 12:24:50.794710       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" pod="kube-system/kindnet-bgb2n"
	E0729 12:28:30.163371       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 12:33:06 ha-767488 kubelet[1381]: E0729 12:33:06.688996    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:33:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:34:06 ha-767488 kubelet[1381]: E0729 12:34:06.683963    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:34:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:35:06 ha-767488 kubelet[1381]: E0729 12:35:06.682665    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:35:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:36:06 ha-767488 kubelet[1381]: E0729 12:36:06.683971    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:36:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:37:06 ha-767488 kubelet[1381]: E0729 12:37:06.685220    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:37:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
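Note on the recurring kubelet errors at the end of the log above: they come from kubelet's periodic iptables canary check, which tries to (re)create a KUBE-KUBELET-CANARY chain to detect rule flushes. In this guest the IPv6 nat table is unavailable (most likely the ip6table_nat kernel module is not loaded), so the ip6tables call fails every minute. A minimal Go sketch, purely illustrative and not kubelet or minikube code, of probing whether that table is usable (requires root/CAP_NET_ADMIN):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Listing the IPv6 nat table fails with "Table does not exist" when the
        // kernel lacks ip6table_nat, matching the kubelet canary errors above.
        out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
        if err != nil {
            fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
            return
        }
        fmt.Println("ip6tables nat table is available")
    }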
** stderr ** 
	E0729 12:37:49.482039  259275 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
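The "failed to output last start logs ... bufio.Scanner: token too long" line in the stderr block above is the standard error Go's bufio.Scanner returns (bufio.ErrTooLong) when a single line exceeds its default 64 KiB token limit, which is what happens while reading the very long lastStart.txt. A minimal sketch of reading such a file with an enlarged scanner buffer; this is illustrative only, not minikube's actual logs.go, and the path is hypothetical:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        // Hypothetical path, for illustration only.
        f, err := os.Open("lastStart.txt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Raise the per-line limit from the default 64 KiB to 1 MiB so very long
        // lines do not trigger "bufio.Scanner: token too long".
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan error:", err)
        }
    }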
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (684.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (2.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-767488 node delete m03 -v=7 --alsologtostderr: exit status 83 (137.201533ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-767488-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-767488"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:37:51.510427  259355 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:37:51.510804  259355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:37:51.510823  259355 out.go:304] Setting ErrFile to fd 2...
	I0729 12:37:51.510827  259355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:37:51.511039  259355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:37:51.511394  259355 mustload.go:65] Loading cluster: ha-767488
	I0729 12:37:51.511787  259355 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:37:51.512145  259355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.512188  259355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.527312  259355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
	I0729 12:37:51.527798  259355 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.528498  259355 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.528518  259355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.528862  259355 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.529095  259355 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:37:51.530766  259355 host.go:66] Checking if "ha-767488" exists ...
	I0729 12:37:51.531152  259355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.531207  259355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.546090  259355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 12:37:51.546502  259355 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.547014  259355 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.547038  259355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.547382  259355 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.547602  259355 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:37:51.548216  259355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.548264  259355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.564219  259355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I0729 12:37:51.564738  259355 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.565275  259355 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.565305  259355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.565642  259355 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.565846  259355 main.go:141] libmachine: (ha-767488-m02) Calling .GetState
	I0729 12:37:51.567354  259355 host.go:66] Checking if "ha-767488-m02" exists ...
	I0729 12:37:51.567693  259355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.567740  259355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.582025  259355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I0729 12:37:51.582418  259355 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.582848  259355 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.582868  259355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.583214  259355 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.583421  259355 main.go:141] libmachine: (ha-767488-m02) Calling .DriverName
	I0729 12:37:51.583917  259355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.583982  259355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.598643  259355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0729 12:37:51.599043  259355 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.599467  259355 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.599491  259355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.599820  259355 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.600009  259355 main.go:141] libmachine: (ha-767488-m03) Calling .GetState
	I0729 12:37:51.603489  259355 out.go:177] * The control-plane node ha-767488-m03 host is not running: state=Stopped
	I0729 12:37:51.604826  259355 out.go:177]   To start a cluster, run: "minikube start -p ha-767488"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-linux-amd64 -p ha-767488 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr: exit status 7 (452.064717ms)

                                                
                                                
-- stdout --
	ha-767488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767488-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-767488-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767488-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:37:51.649130  259397 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:37:51.649401  259397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:37:51.649410  259397 out.go:304] Setting ErrFile to fd 2...
	I0729 12:37:51.649414  259397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:37:51.649589  259397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:37:51.649739  259397 out.go:298] Setting JSON to false
	I0729 12:37:51.649764  259397 mustload.go:65] Loading cluster: ha-767488
	I0729 12:37:51.649875  259397 notify.go:220] Checking for updates...
	I0729 12:37:51.650115  259397 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:37:51.650131  259397 status.go:255] checking status of ha-767488 ...
	I0729 12:37:51.650503  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.650543  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.667459  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33185
	I0729 12:37:51.668035  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.668737  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.668764  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.669070  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.669250  259397 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:37:51.670894  259397 status.go:330] ha-767488 host status = "Running" (err=<nil>)
	I0729 12:37:51.670914  259397 host.go:66] Checking if "ha-767488" exists ...
	I0729 12:37:51.671203  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.671250  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.686309  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0729 12:37:51.686721  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.687136  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.687167  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.687462  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.687637  259397 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:37:51.690249  259397 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:37:51.690721  259397 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:37:51.690748  259397 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:37:51.690865  259397 host.go:66] Checking if "ha-767488" exists ...
	I0729 12:37:51.691263  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.691326  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.706277  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0729 12:37:51.706708  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.707163  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.707184  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.707495  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.707708  259397 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:37:51.707902  259397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:37:51.707935  259397 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:37:51.710915  259397 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:37:51.711315  259397 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:37:51.711339  259397 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:37:51.711477  259397 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:37:51.711655  259397 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:37:51.711816  259397 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:37:51.711958  259397 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:37:51.796640  259397 ssh_runner.go:195] Run: systemctl --version
	I0729 12:37:51.802465  259397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:37:51.817818  259397 kubeconfig.go:125] found "ha-767488" server: "https://192.168.39.254:8443"
	I0729 12:37:51.817855  259397 api_server.go:166] Checking apiserver status ...
	I0729 12:37:51.817890  259397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:37:51.833030  259397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4021/cgroup
	W0729 12:37:51.843199  259397 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4021/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:37:51.843255  259397 ssh_runner.go:195] Run: ls
	I0729 12:37:51.848657  259397 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:37:51.852919  259397 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 12:37:51.852944  259397 status.go:422] ha-767488 apiserver status = Running (err=<nil>)
	I0729 12:37:51.852962  259397 status.go:257] ha-767488 status: &{Name:ha-767488 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:37:51.853002  259397 status.go:255] checking status of ha-767488-m02 ...
	I0729 12:37:51.853279  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.853310  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.868701  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0729 12:37:51.869066  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.869590  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.869609  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.869939  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.870126  259397 main.go:141] libmachine: (ha-767488-m02) Calling .GetState
	I0729 12:37:51.871754  259397 status.go:330] ha-767488-m02 host status = "Running" (err=<nil>)
	I0729 12:37:51.871769  259397 host.go:66] Checking if "ha-767488-m02" exists ...
	I0729 12:37:51.872030  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.872067  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.886368  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I0729 12:37:51.886795  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.887236  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.887254  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.887522  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.887689  259397 main.go:141] libmachine: (ha-767488-m02) Calling .GetIP
	I0729 12:37:51.890377  259397 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:37:51.890805  259397 main.go:141] libmachine: (ha-767488-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:48:8f", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:25:48 +0000 UTC Type:0 Mac:52:54:00:2e:48:8f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-767488-m02 Clientid:01:52:54:00:2e:48:8f}
	I0729 12:37:51.890833  259397 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined IP address 192.168.39.45 and MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:37:51.890984  259397 host.go:66] Checking if "ha-767488-m02" exists ...
	I0729 12:37:51.891269  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:51.891301  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:51.906534  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0729 12:37:51.906936  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:51.907364  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:51.907387  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:51.907675  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:51.907859  259397 main.go:141] libmachine: (ha-767488-m02) Calling .DriverName
	I0729 12:37:51.908061  259397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:37:51.908084  259397 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHHostname
	I0729 12:37:51.910592  259397 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:37:51.911035  259397 main.go:141] libmachine: (ha-767488-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:48:8f", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:25:48 +0000 UTC Type:0 Mac:52:54:00:2e:48:8f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-767488-m02 Clientid:01:52:54:00:2e:48:8f}
	I0729 12:37:51.911069  259397 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined IP address 192.168.39.45 and MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:37:51.911151  259397 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHPort
	I0729 12:37:51.911305  259397 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHKeyPath
	I0729 12:37:51.911448  259397 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHUsername
	I0729 12:37:51.911583  259397 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488-m02/id_rsa Username:docker}
	I0729 12:37:51.992358  259397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:37:52.006861  259397 kubeconfig.go:125] found "ha-767488" server: "https://192.168.39.254:8443"
	I0729 12:37:52.006889  259397 api_server.go:166] Checking apiserver status ...
	I0729 12:37:52.006921  259397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0729 12:37:52.019600  259397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:37:52.019625  259397 status.go:422] ha-767488-m02 apiserver status = Running (err=<nil>)
	I0729 12:37:52.019633  259397 status.go:257] ha-767488-m02 status: &{Name:ha-767488-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:37:52.019650  259397 status.go:255] checking status of ha-767488-m03 ...
	I0729 12:37:52.019990  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:52.020031  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:52.035114  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0729 12:37:52.035534  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:52.035987  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:52.036012  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:52.036294  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:52.036479  259397 main.go:141] libmachine: (ha-767488-m03) Calling .GetState
	I0729 12:37:52.037890  259397 status.go:330] ha-767488-m03 host status = "Stopped" (err=<nil>)
	I0729 12:37:52.037904  259397 status.go:343] host is not running, skipping remaining checks
	I0729 12:37:52.037912  259397 status.go:257] ha-767488-m03 status: &{Name:ha-767488-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:37:52.037944  259397 status.go:255] checking status of ha-767488-m04 ...
	I0729 12:37:52.038234  259397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:52.038277  259397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:52.054566  259397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0729 12:37:52.054949  259397 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:52.055506  259397 main.go:141] libmachine: Using API Version  1
	I0729 12:37:52.055529  259397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:52.055829  259397 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:52.056065  259397 main.go:141] libmachine: (ha-767488-m04) Calling .GetState
	I0729 12:37:52.057547  259397 status.go:330] ha-767488-m04 host status = "Stopped" (err=<nil>)
	I0729 12:37:52.057562  259397 status.go:343] host is not running, skipping remaining checks
	I0729 12:37:52.057570  259397 status.go:257] ha-767488-m04 status: &{Name:ha-767488-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.69544007s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	| node    | ha-767488 node delete m03 -v=7                                                   | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:28:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:28:29.213184  257176 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:28:29.213435  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213444  257176 out.go:304] Setting ErrFile to fd 2...
	I0729 12:28:29.213448  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213604  257176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:28:29.214122  257176 out.go:298] Setting JSON to false
	I0729 12:28:29.215063  257176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7852,"bootTime":1722248257,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:28:29.215118  257176 start.go:139] virtualization: kvm guest
	I0729 12:28:29.217142  257176 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:28:29.218351  257176 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:28:29.218358  257176 notify.go:220] Checking for updates...
	I0729 12:28:29.220405  257176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:28:29.221684  257176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:28:29.222900  257176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:28:29.224025  257176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:28:29.225157  257176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:28:29.226709  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:29.226808  257176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:28:29.227211  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.227254  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.242929  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0729 12:28:29.243340  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.243859  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.243878  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.244194  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.244404  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.277920  257176 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:28:29.279142  257176 start.go:297] selected driver: kvm2
	I0729 12:28:29.279164  257176 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.279323  257176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:28:29.279655  257176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.279742  257176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:28:29.294785  257176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:28:29.295450  257176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:28:29.295597  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:28:29.295609  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:28:29.295668  257176 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.295787  257176 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.297555  257176 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:28:29.298735  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:28:29.298761  257176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:28:29.298770  257176 cache.go:56] Caching tarball of preloaded images
	I0729 12:28:29.298837  257176 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:28:29.298847  257176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:28:29.298958  257176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:28:29.299164  257176 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:28:29.299217  257176 start.go:364] duration metric: took 29.143µs to acquireMachinesLock for "ha-767488"
	I0729 12:28:29.299236  257176 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:28:29.299241  257176 fix.go:54] fixHost starting: 
	I0729 12:28:29.299513  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.299545  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.313514  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0729 12:28:29.313936  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.314395  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.314416  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.314828  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.315041  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.315199  257176 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:28:29.316538  257176 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:28:29.316562  257176 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:28:29.318256  257176 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:28:29.319254  257176 machine.go:94] provisionDockerMachine start ...
	I0729 12:28:29.319272  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.319461  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.321717  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322169  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.322198  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322326  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.322496  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322637  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322767  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.322944  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.323131  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.323141  257176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:28:29.438235  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.438263  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438523  257176 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:28:29.438557  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438793  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.441520  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.441975  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.442000  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.442119  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.442319  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442466  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442624  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.442834  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.443017  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.443028  257176 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:28:29.574562  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.574598  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.577319  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577768  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.577796  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577984  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.578163  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578349  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578522  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.578697  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.578860  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.578875  257176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:28:29.694293  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:28:29.694324  257176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:28:29.694371  257176 buildroot.go:174] setting up certificates
	I0729 12:28:29.694382  257176 provision.go:84] configureAuth start
	I0729 12:28:29.694404  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.694702  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:28:29.697510  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.697893  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.697924  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.698075  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.700392  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700707  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.700736  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700956  257176 provision.go:143] copyHostCerts
	I0729 12:28:29.700988  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701018  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:28:29.701026  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701092  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:28:29.701180  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701196  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:28:29.701203  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701232  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:28:29.701337  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701356  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:28:29.701363  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701386  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:28:29.701443  257176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
	I0729 12:28:29.865634  257176 provision.go:177] copyRemoteCerts
	I0729 12:28:29.865706  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:28:29.865737  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.868239  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868633  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.868668  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868894  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.869091  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.869258  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.869404  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:28:29.954969  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:28:29.955070  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 12:28:29.983588  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:28:29.983664  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:28:30.008507  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:28:30.008564  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 12:28:30.033341  257176 provision.go:87] duration metric: took 338.942174ms to configureAuth
	I0729 12:28:30.033370  257176 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:28:30.033650  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:30.033738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:30.036595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037005  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:30.037034  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037194  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:30.037406  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037590  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037757  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:30.037917  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:30.038088  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:30.038102  257176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:30:00.889607  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:30:00.889647  257176 machine.go:97] duration metric: took 1m31.570380134s to provisionDockerMachine
	I0729 12:30:00.889661  257176 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:30:00.889671  257176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:30:00.889688  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:00.890061  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:30:00.890101  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:00.893255  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893756  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:00.893776  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893964  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:00.894195  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:00.894355  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:00.894488  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:00.985670  257176 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:30:00.990118  257176 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:30:00.990148  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:30:00.990216  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:30:00.990282  257176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:30:00.990293  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:30:00.990393  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:30:01.000194  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:01.026191  257176 start.go:296] duration metric: took 136.51077ms for postStartSetup
	I0729 12:30:01.026247  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.026593  257176 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:30:01.026621  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.029199  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029572  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.029595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.029944  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.030081  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.030227  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:30:01.115131  257176 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:30:01.115161  257176 fix.go:56] duration metric: took 1m31.815919439s for fixHost
	I0729 12:30:01.115184  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.117586  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.117880  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.117908  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.118141  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.118375  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118566  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118718  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.118901  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:30:01.119139  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:30:01.119158  257176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:30:01.229703  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256201.208888269
	
	I0729 12:30:01.229730  257176 fix.go:216] guest clock: 1722256201.208888269
	I0729 12:30:01.229740  257176 fix.go:229] Guest: 2024-07-29 12:30:01.208888269 +0000 UTC Remote: 2024-07-29 12:30:01.115168505 +0000 UTC m=+91.939593395 (delta=93.719764ms)
	I0729 12:30:01.229788  257176 fix.go:200] guest clock delta is within tolerance: 93.719764ms
	I0729 12:30:01.229811  257176 start.go:83] releasing machines lock for "ha-767488", held for 1m31.930567231s
	I0729 12:30:01.229843  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.230107  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:01.232737  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233111  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.233145  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233363  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.233889  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234111  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234230  257176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:30:01.234695  257176 ssh_runner.go:195] Run: cat /version.json
	I0729 12:30:01.234732  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.234779  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.238055  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238191  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238449  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238476  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238583  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238695  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238714  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238744  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.238859  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238932  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239053  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.239125  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.239217  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239383  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.342923  257176 ssh_runner.go:195] Run: systemctl --version
	I0729 12:30:01.349719  257176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:30:01.510709  257176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:30:01.520723  257176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:30:01.520829  257176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:30:01.530564  257176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:30:01.530598  257176 start.go:495] detecting cgroup driver to use...
	I0729 12:30:01.530671  257176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:30:01.547174  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:30:01.561910  257176 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:30:01.561979  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:30:01.585740  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:30:01.618564  257176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:30:01.783506  257176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:30:01.940620  257176 docker.go:233] disabling docker service ...
	I0729 12:30:01.940698  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:30:01.959815  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:30:01.974713  257176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:30:02.128949  257176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:30:02.297303  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:30:02.311979  257176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:30:02.332382  257176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:30:02.332459  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.344118  257176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:30:02.344185  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.355791  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.367033  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.377875  257176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:30:02.389970  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.401378  257176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.413069  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.423934  257176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:30:02.433485  257176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:30:02.443209  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:02.597078  257176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:30:06.946792  257176 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.349677004s)
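	(Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and the cgroupfs cgroup manager, before crio is restarted. A minimal Go sketch of the same rewrite, for illustration only; the path and regexes are copied from the logged commands, and running this anywhere other than inside the test VM is an assumption.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// Illustrative sketch: apply the same two substitutions the logged sed
	// commands perform on the CRI-O drop-in config.
	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cfg := string(data)
		// pause_image = "registry.k8s.io/pause:3.9"
		cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.9"`)
		// cgroup_manager = "cgroupfs"
		cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(cfg, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}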
	I0729 12:30:06.946822  257176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:30:06.946866  257176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:30:06.951885  257176 start.go:563] Will wait 60s for crictl version
	I0729 12:30:06.951947  257176 ssh_runner.go:195] Run: which crictl
	I0729 12:30:06.955891  257176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:30:06.996933  257176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:30:06.997009  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.029517  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.067863  257176 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:30:07.069386  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:07.072261  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072653  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:07.072677  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072963  257176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:30:07.077985  257176 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:30:07.078159  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:30:07.078210  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.131360  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.131380  257176 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:30:07.131434  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.166976  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.167006  257176 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:30:07.167019  257176 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:30:07.167163  257176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:30:07.167263  257176 ssh_runner.go:195] Run: crio config
	I0729 12:30:07.218394  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:30:07.218416  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:30:07.218425  257176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:30:07.218446  257176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:30:07.218636  257176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
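	(Note: the kubeadm config above is rendered from the options logged at kubeadm.go:181, with the node's advertise address, name, CRI socket and pod subnet stamped in. A hedged text/template sketch of how such a fragment could be generated; the template text and struct fields here are illustrative, not minikube's actual implementation.)

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical sketch: render a kubeadm InitConfiguration fragment from
	// per-node values, in the spirit of the generated config above.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		// Values below are taken from the log for this node (ha-767488).
		_ = t.Execute(os.Stdout, struct {
			NodeIP   string
			NodeName string
			Port     int
		}{NodeIP: "192.168.39.217", NodeName: "ha-767488", Port: 8443})
	}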
	
	I0729 12:30:07.218660  257176 kube-vip.go:115] generating kube-vip config ...
	I0729 12:30:07.218715  257176 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:30:07.231281  257176 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:30:07.231382  257176 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
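	(Note: the kube-vip static pod above carries the control-plane VIP, 192.168.39.254, and the load-balancing switches as container env vars. A sketch, assuming the k8s.io/api and sigs.k8s.io/yaml modules are available, that parses such a manifest and reads the VIP back out; it is not part of minikube itself.)

	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		sigyaml "sigs.k8s.io/yaml"
	)

	// Sketch: parse a kube-vip static-pod manifest (such as the one written to
	// /etc/kubernetes/manifests/kube-vip.yaml above) and report the VIP address.
	func main() {
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var pod corev1.Pod
		if err := sigyaml.Unmarshal(data, &pod); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if len(pod.Spec.Containers) == 0 {
			fmt.Fprintln(os.Stderr, "manifest has no containers")
			os.Exit(1)
		}
		for _, env := range pod.Spec.Containers[0].Env {
			if env.Name == "address" {
				fmt.Println("control-plane VIP:", env.Value) // expected 192.168.39.254
			}
		}
	}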
	I0729 12:30:07.231469  257176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:30:07.241143  257176 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:30:07.241203  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:30:07.251296  257176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:30:07.268752  257176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:30:07.286269  257176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:30:07.306290  257176 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:30:07.325270  257176 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:30:07.330227  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:07.480445  257176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:30:07.495284  257176 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:30:07.495312  257176 certs.go:194] generating shared ca certs ...
	I0729 12:30:07.495334  257176 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.495514  257176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:30:07.495585  257176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:30:07.495600  257176 certs.go:256] generating profile certs ...
	I0729 12:30:07.495692  257176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:30:07.495719  257176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:30:07.495734  257176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.45 192.168.39.210 192.168.39.254]
	I0729 12:30:07.554302  257176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 ...
	I0729 12:30:07.554335  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293: {Name:mkc55706e98723442a7209c78a851c6aeec63640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554502  257176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 ...
	I0729 12:30:07.554512  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293: {Name:mkd6b648aa8c639f0f8174c6258aa3c28a419e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554579  257176 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt
	I0729 12:30:07.554733  257176 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key
	I0729 12:30:07.554863  257176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:30:07.554878  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:30:07.554890  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:30:07.554905  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:30:07.554917  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:30:07.554930  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:30:07.554942  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:30:07.554954  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:30:07.554966  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:30:07.555012  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:30:07.555038  257176 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:30:07.555053  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:30:07.555074  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:30:07.555094  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:30:07.555113  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:30:07.555149  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:07.555175  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:30:07.555188  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:07.555200  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:30:07.555742  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:30:07.581960  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:30:07.606534  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:30:07.651322  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:30:07.734079  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:30:07.843422  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:30:07.919383  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:30:08.009302  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:30:08.114819  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:30:08.177084  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:30:08.323565  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:30:08.418339  257176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:30:08.452890  257176 ssh_runner.go:195] Run: openssl version
	I0729 12:30:08.463083  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:30:08.481125  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488340  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488407  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.496532  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:30:08.512456  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:30:08.528227  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.535939  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.536020  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.542124  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:30:08.556827  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:30:08.570963  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578024  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578072  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.583957  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
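	(Note: each ls/openssl/ln sequence above installs a CA bundle under /etc/ssl/certs as an OpenSSL subject-hash symlink, e.g. b5213941.0 for minikubeCA.pem. A small sketch of that step, shelling out to openssl the same way the log does; the paths are taken from the log and running it outside the VM is an assumption.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// Sketch: compute the OpenSSL subject hash of a CA certificate and create
	// the <hash>.0 symlink in /etc/ssl/certs, mirroring the logged commands.
	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate `ln -fs`
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("linked", link, "->", cert)
	}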
	I0729 12:30:08.599010  257176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:30:08.609458  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:30:08.622965  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:30:08.645142  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:30:08.661889  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:30:08.733013  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:30:08.752828  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
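	(Note: each `openssl x509 -noout -checkend 86400` call above succeeds only if the certificate is still valid 24 hours from now. A rough standard-library Go equivalent for one of the certificates named above; the file path is an assumption reused from the log.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
	func main() {
		path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // from the log
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h, expires", cert.NotAfter)
	}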
	I0729 12:30:08.763265  257176 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:30:08.763447  257176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:30:08.763516  257176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:30:08.826291  257176 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:30:08.826316  257176 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:30:08.826319  257176 cri.go:89] found id: "f39e050cd5cc4b05a81e93b2261e728d2c07bc7c1daa3162edfde11e82a4620c"
	I0729 12:30:08.826323  257176 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:30:08.826325  257176 cri.go:89] found id: "a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27"
	I0729 12:30:08.826329  257176 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:30:08.826331  257176 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:30:08.826334  257176 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:30:08.826336  257176 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:30:08.826341  257176 cri.go:89] found id: "14bf682e420cb00f83e39a018ac3723f16ed71fccee45180d30073e87b224475"
	I0729 12:30:08.826343  257176 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:30:08.826345  257176 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:30:08.826348  257176 cri.go:89] found id: "d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91"
	I0729 12:30:08.826351  257176 cri.go:89] found id: "70136b17c65dd39a4d8ff8ecf6e4c4229432e46ce9fcbae7271cb05229ee641d"
	I0729 12:30:08.826356  257176 cri.go:89] found id: ""
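	(Note: the container IDs listed as "found id:" above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call, which prints one container ID per line. A hedged sketch of collecting them the same way on a host where crictl is installed:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Sketch: list kube-system container IDs via crictl's --quiet output,
	// one ID per line, as the log above does.
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}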
	I0729 12:30:08.826397  257176 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.674421643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256672674345086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e132688-8ee3-454a-b1dc-17aabbc44949 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.674984521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a6483da-fda0-4ac3-a8e4-94aa61d5ebf1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.675047474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a6483da-fda0-4ac3-a8e4-94aa61d5ebf1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.675517812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a6483da-fda0-4ac3-a8e4-94aa61d5ebf1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.720343040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74e53589-6607-43d2-800d-2e22a0e8caef name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.720431815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74e53589-6607-43d2-800d-2e22a0e8caef name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.721519185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9b4ebd0-907e-4dfb-8bab-324d61ff6021 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.722047741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256672722021072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9b4ebd0-907e-4dfb-8bab-324d61ff6021 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.722619579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=819f2d89-689f-4645-8605-545598e95561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.722715313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=819f2d89-689f-4645-8605-545598e95561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.723196407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=819f2d89-689f-4645-8605-545598e95561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.765738740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09006d69-9a2a-4cce-a6c8-9cf12144e9e4 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.765867268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09006d69-9a2a-4cce-a6c8-9cf12144e9e4 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.766965866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51907f84-f980-4b8b-af9c-2f99c9bdf59d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.767395108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256672767373311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51907f84-f980-4b8b-af9c-2f99c9bdf59d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.768105476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e68118d7-c9a6-4ab9-87db-f0fa2fd517a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.768174094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e68118d7-c9a6-4ab9-87db-f0fa2fd517a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.768710106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e68118d7-c9a6-4ab9-87db-f0fa2fd517a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.815879059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6209d0b1-c7ed-4054-8a84-912838eb91ef name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.815994136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6209d0b1-c7ed-4054-8a84-912838eb91ef name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.828285016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1be6a78-1b92-42d6-9107-4c96fc75f53d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.829194328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256672829168834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1be6a78-1b92-42d6-9107-4c96fc75f53d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.829869643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99365589-d210-4d56-98c5-f8fb5579ce0a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.829953238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99365589-d210-4d56-98c5-f8fb5579ce0a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:52 ha-767488 crio[3370]: time="2024-07-29 12:37:52.830341596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99365589-d210-4d56-98c5-f8fb5579ce0a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66eeaa3de5dde       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Running             kube-controller-manager   4                   77ae4bca5cb19       kube-controller-manager-ha-767488
	149dfcffe55a7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Exited              kube-controller-manager   3                   77ae4bca5cb19       kube-controller-manager-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      7 minutes ago       Running             busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      7 minutes ago       Running             busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	7ffae0e726786       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      7 minutes ago       Running             kube-vip                  0                   4ac1d50b066bb       kube-vip-ha-767488
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	76b855b3ad75b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       1                   3b6ba7ca06eb5       storage-provisioner
	547d6699a30a2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            1                   874397bc99826       kube-apiserver-ha-767488
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      7 minutes ago       Running             kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      1                   c38a2d43be153       etcd-ha-767488
	79b136a6e0ea0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   93f5e8a8985f2       busybox-fc5497c4f-trgfp
	3f7b67549d5f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   841baabfcb1b9       busybox-fc5497c4f-4ppv4
	a26ce9fba519a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Exited              storage-provisioner       0                   fa4b77fe094c4       storage-provisioner
	c263b16acab21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   c2f0a3db73b36       coredns-7db6d8ff4d-k6r5l
	ed92faf8d1c93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   721397f12db8c       coredns-7db6d8ff4d-qqt5t
	e2114078a73c1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   a4aeb6b1329f7       kindnet-6x56p
	a99c50ffbfb28       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   7ba65a0686e20       kube-proxy-sqk96
	f1ea8fbc1b3ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   f96272e7bee5b       kube-scheduler-ha-767488
	dab08a0e0f3c1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   489cc61ac2d59       etcd-ha-767488
	d427719357ecf       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      16 minutes ago      Exited              kube-apiserver            0                   d3dacdbbe9ee4       kube-apiserver-ha-767488
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42790 - 48270 "HINFO IN 5378893488737017947.5532814832189282968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010592186s
	
	
	==> coredns [c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d] <==
	[INFO] 10.244.0.5:56893 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.006105477s
	[INFO] 10.244.2.2:49553 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000142409s
	[INFO] 10.244.0.4:44644 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190983s
	[INFO] 10.244.0.4:50509 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000060342s
	[INFO] 10.244.0.4:50667 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000235509s
	[INFO] 10.244.0.4:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002136398s
	[INFO] 10.244.0.5:59842 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008887594s
	[INFO] 10.244.0.5:50358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114137s
	[INFO] 10.244.2.2:51452 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000291176s
	[INFO] 10.244.2.2:40431 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000264874s
	[INFO] 10.244.2.2:35432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167749s
	[INFO] 10.244.0.4:53618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068812s
	[INFO] 10.244.0.4:52172 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001711814s
	[INFO] 10.244.0.4:47059 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131595s
	[INFO] 10.244.0.4:39902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147006s
	[INFO] 10.244.0.4:37624 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173908s
	[INFO] 10.244.0.5:52999 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135418s
	[INFO] 10.244.2.2:39192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096392s
	[INFO] 10.244.2.2:47682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102837s
	[INFO] 10.244.0.4:43135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079564s
	[INFO] 10.244.0.4:54022 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000229955s
	[INFO] 10.244.0.4:49468 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000035685s
	[INFO] 10.244.0.4:56523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000031196s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36411 - 11278 "HINFO IN 1809215905934978785.4639219165358094612. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008507012s
	
	
	==> coredns [ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0] <==
	[INFO] 10.244.2.2:52575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121686s
	[INFO] 10.244.2.2:60306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096516s
	[INFO] 10.244.2.2:56750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201569s
	[INFO] 10.244.0.4:53864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076226s
	[INFO] 10.244.0.4:43895 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007603s
	[INFO] 10.244.0.4:49768 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001191618s
	[INFO] 10.244.0.4:36610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091068s
	[INFO] 10.244.0.5:36533 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152157s
	[INFO] 10.244.0.5:59316 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006399s
	[INFO] 10.244.0.5:59406 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051375s
	[INFO] 10.244.0.5:56054 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054289s
	[INFO] 10.244.2.2:32902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000349565s
	[INFO] 10.244.2.2:56936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214735s
	[INFO] 10.244.2.2:38037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076517s
	[INFO] 10.244.2.2:33788 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066283s
	[INFO] 10.244.0.4:46469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080696s
	[INFO] 10.244.0.4:56376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069276s
	[INFO] 10.244.0.4:41139 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003161s
	[INFO] 10.244.0.5:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123194s
	[INFO] 10.244.0.5:43997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184384s
	[INFO] 10.244.0.5:34612 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094985s
	[INFO] 10.244.2.2:57694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131654s
	[INFO] 10.244.2.2:52834 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009944s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-767488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:37:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-767488
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4910accb98434efca56ff8b39068800c
	  System UUID:                4910accb-9843-4efc-a56f-f8b39068800c
	  Boot ID:                    f538ab8c-89b7-40ce-b82e-7644a867ee15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4ppv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     busybox-fc5497c4f-trgfp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-k6r5l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-qqt5t             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-767488                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-6x56p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-767488             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-767488    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-sqk96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-767488             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-767488                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m51s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-767488 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Warning  ContainerGCFailed        7m47s (x2 over 8m47s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m30s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	
	
	Name:               ha-767488-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:22:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-767488-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9a3fe2d6456464f8574d4c1d95e4f21
	  System UUID:                d9a3fe2d-6456-464f-8574-d4c1d95e4f21
	  Boot ID:                    9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jjx77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 etcd-ha-767488-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-l7jpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-767488-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-767488-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-d9lg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-767488-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-767488-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m57s              kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 11m                kubelet          Node ha-767488-m02 has been rebooted, boot id: 9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Normal   RegisteredNode           11m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  ContainerGCFailed        6m58s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m30s              node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	
	
	Name:               ha-767488-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:26:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-767488-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ca168b2de41451a82ff59b787c535ad
	  System UUID:                5ca168b2-de41-451a-82ff-59b787c535ad
	  Boot ID:                    8622bb8b-ae59-41f0-afa6-05666f1768af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q6fnx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-767488-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-bz9pp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-767488-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-767488-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tzj27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-767488-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-767488-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  RegisteredNode           14m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  NodeNotReady             10m                node-controller  Node ha-767488-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           5m30s              node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	
	
	Name:               ha-767488-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:26:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-767488-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 326a5fb51aae42b7b8056fc3c9e53faf
	  System UUID:                326a5fb5-1aae-42b7-b805-6fc3c9e53faf
	  Boot ID:                    a14ff095-d004-41b8-991d-b6ed30b10920
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bgb2n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-2m5gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  NodeNotReady             10m                node-controller  Node ha-767488-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           5m30s              node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	
	
	==> dmesg <==
	[  +0.059198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062192] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.161618] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.141343] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.277698] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.104100] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.675894] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060211] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 12:21] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.541880] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"warn","ts":"2024-07-29T12:37:53.197612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.207042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.211067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.217027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.222263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.232723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.238137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.243062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.25533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.26602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.27487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.280405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.284711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.29719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.306274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.314333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.318505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.322297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.322692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.342091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.350732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.360005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.38863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.390955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:53.416987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> etcd [dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a] <==
	{"level":"info","ts":"2024-07-29T12:28:30.36201Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"a09c9983ac28f1fd","old-leader-member-id":"a09c9983ac28f1fd","new-leader-member-id":"30f76e47e42605a5","took":"101.152061ms"}
	{"level":"info","ts":"2024-07-29T12:28:30.362301Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362459Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362504Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362589Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362622Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362871Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363054Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363107Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"30f76e47e42605a5","error":"failed to read 30f76e47e42605a5 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T12:28:30.363176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363422Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T12:28:30.363566Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363616Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363657Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.3637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.363751Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364622Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364716Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364883Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.370716Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"warn","ts":"2024-07-29T12:28:30.370988Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55490","server-name":"","error":"read tcp 192.168.39.217:2380->192.168.39.45:55490: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:28:30.371604Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55480","server-name":"","error":"set tcp 192.168.39.217:2380: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:28:31.371639Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:28:31.37168Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> kernel <==
	 12:37:53 up 17 min,  0 users,  load average: 0.29, 0.34, 0.27
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:37:19.360506       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:29.352129       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:29.352281       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:29.352464       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:29.352509       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:29.352585       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:29.352605       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:29.352682       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:29.352702       1 main.go:299] handling current node
	I0729 12:37:39.352575       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:39.352764       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:39.352991       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:39.353023       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:39.353137       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:39.353160       1 main.go:299] handling current node
	I0729 12:37:39.353205       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:39.353234       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:49.361413       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:49.361470       1 main.go:299] handling current node
	I0729 12:37:49.361484       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:49.361489       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:49.361636       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:49.361763       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:49.361901       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:49.361925       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316] <==
	I0729 12:27:52.596723       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.599761       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:02.599920       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.600088       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:02.600113       1 main.go:299] handling current node
	I0729 12:28:02.600135       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:02.600152       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:02.600239       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:02.600258       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602416       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:12.602457       1 main.go:299] handling current node
	I0729 12:28:12.602474       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:12.602503       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:12.602642       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:12.602666       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602727       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:12.602746       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:22.595743       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:22.595784       1 main.go:299] handling current node
	I0729 12:28:22.595836       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:22.595843       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:22.596051       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:22.596107       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:22.596246       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:22.596285       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3] <==
	Trace[2068681805]: ---"Objects listed" error:etcdserver: request timed out 13010ms (12:31:52.763)
	Trace[2068681805]: [13.010453158s] [13.010453158s] END
	E0729 12:31:52.763880       1 cacher.go:475] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0729 12:31:52.763899       1 reflector.go:547] storage/cacher.go:/leases: failed to list *coordination.Lease: etcdserver: request timed out
	I0729 12:31:52.763927       1 trace.go:236] Trace[967270758]: "Reflector ListAndWatch" name:storage/cacher.go:/leases (29-Jul-2024 12:31:39.759) (total time: 13004ms):
	Trace[967270758]: ---"Objects listed" error:etcdserver: request timed out 13004ms (12:31:52.763)
	Trace[967270758]: [13.004565083s] [13.004565083s] END
	E0729 12:31:52.763936       1 cacher.go:475] cacher (leases.coordination.k8s.io): unexpected ListAndWatch error: failed to list *coordination.Lease: etcdserver: request timed out; reinitializing...
	W0729 12:31:52.763958       1 reflector.go:547] storage/cacher.go:/csidrivers: failed to list *storage.CSIDriver: etcdserver: request timed out
	I0729 12:31:52.763994       1 trace.go:236] Trace[710917587]: "Reflector ListAndWatch" name:storage/cacher.go:/csidrivers (29-Jul-2024 12:31:39.753) (total time: 13010ms):
	Trace[710917587]: ---"Objects listed" error:etcdserver: request timed out 13010ms (12:31:52.763)
	Trace[710917587]: [13.010204143s] [13.010204143s] END
	E0729 12:31:52.764016       1 cacher.go:475] cacher (csidrivers.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.CSIDriver: etcdserver: request timed out; reinitializing...
	E0729 12:31:53.729170       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	I0729 12:31:53.729368       1 trace.go:236] Trace[902411417]: "Get" accept:application/json, */*,audit-id:fa59e21f-4667-469a-816a-73d1af07e054,client:192.168.39.1,api-group:,api-version:v1,name:ha-767488-m02,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-767488-m02,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Jul-2024 12:31:46.608) (total time: 7120ms):
	Trace[902411417]: [7.120424258s] [7.120424258s] END
	E0729 12:31:53.729518       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	E0729 12:31:53.729587       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	I0729 12:31:53.731004       1 trace.go:236] Trace[967601293]: "Get" accept:application/json, */*,audit-id:e6e55dfb-efc1-46e2-8f8e-bb982027ae68,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Jul-2024 12:31:47.943) (total time: 5787ms):
	Trace[967601293]: [5.787057446s] [5.787057446s] END
	I0729 12:31:53.731377       1 trace.go:236] Trace[1545811589]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:392ea8d0-13cd-4c24-b7ae-e13a5045beef,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:system:controller:persistent-volume-binder,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:resource,url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:GET (29-Jul-2024 12:31:45.753) (total time: 7977ms):
	Trace[1545811589]: [7.977997758s] [7.977997758s] END
	E0729 12:31:53.731759       1 storage_rbac.go:232] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder: etcdserver: leader changed
	W0729 12:31:54.518380       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.45]
	W0729 12:32:14.519919       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	
	
	==> kube-apiserver [d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91] <==
	I0729 12:28:30.187767       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.188600       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:28:30.189914       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:28:30.189977       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0729 12:28:30.190012       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0729 12:28:30.192079       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192120       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192128       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192141       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.194141       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 12:28:30.194189       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 12:28:30.198902       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 12:28:30.204074       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 12:28:30.205442       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 12:28:30.213471       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.213575       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.213659       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214057       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214205       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214277       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214426       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214629       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214728       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214894       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214965       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f] <==
	I0729 12:31:21.148099       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:31:21.413326       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:31:21.413367       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:31:21.414936       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:31:21.415020       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:31:21.415139       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:31:21.415338       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 12:31:31.426553       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b] <==
	I0729 12:32:23.322258       1 shared_informer.go:320] Caches are synced for namespace
	I0729 12:32:23.375869       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:32:23.381302       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:32:23.385073       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:32:23.407529       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:32:23.421636       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 12:32:23.439354       1 shared_informer.go:320] Caches are synced for taint
	I0729 12:32:23.439677       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 12:32:23.484698       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488"
	I0729 12:32:23.484766       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m03"
	I0729 12:32:23.484846       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m04"
	I0729 12:32:23.484966       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m02"
	I0729 12:32:23.485114       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:32:23.525676       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 12:32:23.930267       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:32:23.955995       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:32:23.956031       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:37:23.555782       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-q6fnx"
	I0729 12:37:23.577937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.269µs"
	I0729 12:37:23.632551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.059928ms"
	I0729 12:37:23.644732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.12292ms"
	I0729 12:37:23.645012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.899µs"
	I0729 12:37:23.654010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.855µs"
	I0729 12:37:28.528253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.910657ms"
	I0729 12:37:28.528967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.66µs"
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	W0729 12:31:17.310464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311579       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311601       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:17.311772       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:17.311782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:26.526651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:26.526903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:29.599349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:29.599511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599615       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 12:31:41.887061       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:41.886328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:41.887293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:54.176621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.176873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.177234       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:54.179432       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.179489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:32:20.696876       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:32:29.397880       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:32:37.296532       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1] <==
	I0729 12:21:17.255773       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:21:17.286934       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0729 12:21:17.337677       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:21:17.337727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:21:17.337746       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:21:17.340517       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:21:17.340710       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:21:17.340741       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:21:17.342294       1 config.go:192] "Starting service config controller"
	I0729 12:21:17.342534       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:21:17.342581       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:21:17.342586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:21:17.343590       1 config.go:319] "Starting node config controller"
	I0729 12:21:17.343624       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:21:17.443485       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:21:17.443697       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:21:17.443586       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	W0729 12:30:12.498631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:30:12.498658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:30:12.498717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:30:12.498744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 12:30:12.498839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:30:12.498867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:30:12.498916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:30:12.498942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:30:12.499009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:30:12.499036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:30:12.499077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:30:12.499101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:30:12.499133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 12:30:12.499159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 12:30:12.711214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:30:12.711269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:30:12.901782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:30:12.901876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:30:13.940936       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:30:13.940995       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:30:14.206492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 12:30:14.206526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:30:14.291769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:30:14.291892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0729 12:30:18.034771       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb] <==
	E0729 12:21:00.957258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:00.966559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:21:00.966602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:21:00.969971       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:21:00.970006       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:21:00.975481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:21:00.975514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:21:00.991207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:21:00.991302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:21:01.043730       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:21:01.043771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:21:01.201334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.201433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.269111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:21:01.269202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:21:01.308519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:21:01.308567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:21:01.484192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.484242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.488207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:21:01.488410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 12:21:03.597444       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:24:50.794520       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bgb2n" node="ha-767488-m04"
	E0729 12:24:50.794710       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" pod="kube-system/kindnet-bgb2n"
	E0729 12:28:30.163371       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 12:33:06 ha-767488 kubelet[1381]: E0729 12:33:06.688996    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:33:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:34:06 ha-767488 kubelet[1381]: E0729 12:34:06.683963    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:34:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:35:06 ha-767488 kubelet[1381]: E0729 12:35:06.682665    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:35:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:36:06 ha-767488 kubelet[1381]: E0729 12:36:06.683971    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:36:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:37:06 ha-767488 kubelet[1381]: E0729 12:37:06.685220    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:37:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:37:52.383317  259486 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (2.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-767488" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-767488\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-767488\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-767488\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.217\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.45\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.210\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.181\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":f
alse,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"Mo
untIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.816062255s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	| node    | ha-767488 node delete m03 -v=7                                                   | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:28:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:28:29.213184  257176 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:28:29.213435  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213444  257176 out.go:304] Setting ErrFile to fd 2...
	I0729 12:28:29.213448  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213604  257176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:28:29.214122  257176 out.go:298] Setting JSON to false
	I0729 12:28:29.215063  257176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7852,"bootTime":1722248257,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:28:29.215118  257176 start.go:139] virtualization: kvm guest
	I0729 12:28:29.217142  257176 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:28:29.218351  257176 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:28:29.218358  257176 notify.go:220] Checking for updates...
	I0729 12:28:29.220405  257176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:28:29.221684  257176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:28:29.222900  257176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:28:29.224025  257176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:28:29.225157  257176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:28:29.226709  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:29.226808  257176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:28:29.227211  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.227254  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.242929  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0729 12:28:29.243340  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.243859  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.243878  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.244194  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.244404  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.277920  257176 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:28:29.279142  257176 start.go:297] selected driver: kvm2
	I0729 12:28:29.279164  257176 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.279323  257176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:28:29.279655  257176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.279742  257176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:28:29.294785  257176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:28:29.295450  257176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:28:29.295597  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:28:29.295609  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:28:29.295668  257176 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.295787  257176 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.297555  257176 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:28:29.298735  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:28:29.298761  257176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:28:29.298770  257176 cache.go:56] Caching tarball of preloaded images
	I0729 12:28:29.298837  257176 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:28:29.298847  257176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:28:29.298958  257176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:28:29.299164  257176 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:28:29.299217  257176 start.go:364] duration metric: took 29.143µs to acquireMachinesLock for "ha-767488"
	I0729 12:28:29.299236  257176 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:28:29.299241  257176 fix.go:54] fixHost starting: 
	I0729 12:28:29.299513  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.299545  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.313514  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0729 12:28:29.313936  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.314395  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.314416  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.314828  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.315041  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.315199  257176 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:28:29.316538  257176 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:28:29.316562  257176 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:28:29.318256  257176 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:28:29.319254  257176 machine.go:94] provisionDockerMachine start ...
	I0729 12:28:29.319272  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.319461  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.321717  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322169  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.322198  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322326  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.322496  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322637  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322767  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.322944  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.323131  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.323141  257176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:28:29.438235  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.438263  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438523  257176 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:28:29.438557  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438793  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.441520  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.441975  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.442000  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.442119  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.442319  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442466  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442624  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.442834  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.443017  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.443028  257176 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:28:29.574562  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.574598  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.577319  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577768  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.577796  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577984  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.578163  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578349  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578522  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.578697  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.578860  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.578875  257176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:28:29.694293  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:28:29.694324  257176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:28:29.694371  257176 buildroot.go:174] setting up certificates
	I0729 12:28:29.694382  257176 provision.go:84] configureAuth start
	I0729 12:28:29.694404  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.694702  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:28:29.697510  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.697893  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.697924  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.698075  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.700392  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700707  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.700736  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700956  257176 provision.go:143] copyHostCerts
	I0729 12:28:29.700988  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701018  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:28:29.701026  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701092  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:28:29.701180  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701196  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:28:29.701203  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701232  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:28:29.701337  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701356  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:28:29.701363  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701386  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:28:29.701443  257176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
	I0729 12:28:29.865634  257176 provision.go:177] copyRemoteCerts
	I0729 12:28:29.865706  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:28:29.865737  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.868239  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868633  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.868668  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868894  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.869091  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.869258  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.869404  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:28:29.954969  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:28:29.955070  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 12:28:29.983588  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:28:29.983664  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:28:30.008507  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:28:30.008564  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 12:28:30.033341  257176 provision.go:87] duration metric: took 338.942174ms to configureAuth
	I0729 12:28:30.033370  257176 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:28:30.033650  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:30.033738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:30.036595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037005  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:30.037034  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037194  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:30.037406  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037590  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037757  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:30.037917  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:30.038088  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:30.038102  257176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:30:00.889607  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:30:00.889647  257176 machine.go:97] duration metric: took 1m31.570380134s to provisionDockerMachine
	I0729 12:30:00.889661  257176 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:30:00.889671  257176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:30:00.889688  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:00.890061  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:30:00.890101  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:00.893255  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893756  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:00.893776  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893964  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:00.894195  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:00.894355  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:00.894488  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:00.985670  257176 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:30:00.990118  257176 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:30:00.990148  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:30:00.990216  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:30:00.990282  257176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:30:00.990293  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:30:00.990393  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:30:01.000194  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:01.026191  257176 start.go:296] duration metric: took 136.51077ms for postStartSetup
	I0729 12:30:01.026247  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.026593  257176 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:30:01.026621  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.029199  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029572  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.029595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.029944  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.030081  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.030227  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:30:01.115131  257176 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:30:01.115161  257176 fix.go:56] duration metric: took 1m31.815919439s for fixHost
	I0729 12:30:01.115184  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.117586  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.117880  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.117908  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.118141  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.118375  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118566  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118718  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.118901  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:30:01.119139  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:30:01.119158  257176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:30:01.229703  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256201.208888269
	
	I0729 12:30:01.229730  257176 fix.go:216] guest clock: 1722256201.208888269
	I0729 12:30:01.229740  257176 fix.go:229] Guest: 2024-07-29 12:30:01.208888269 +0000 UTC Remote: 2024-07-29 12:30:01.115168505 +0000 UTC m=+91.939593395 (delta=93.719764ms)
	I0729 12:30:01.229788  257176 fix.go:200] guest clock delta is within tolerance: 93.719764ms
	I0729 12:30:01.229811  257176 start.go:83] releasing machines lock for "ha-767488", held for 1m31.930567231s
	I0729 12:30:01.229843  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.230107  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:01.232737  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233111  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.233145  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233363  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.233889  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234111  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234230  257176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:30:01.234695  257176 ssh_runner.go:195] Run: cat /version.json
	I0729 12:30:01.234732  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.234779  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.238055  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238191  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238449  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238476  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238583  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238695  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238714  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238744  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.238859  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238932  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239053  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.239125  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.239217  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239383  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.342923  257176 ssh_runner.go:195] Run: systemctl --version
	I0729 12:30:01.349719  257176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:30:01.510709  257176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:30:01.520723  257176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:30:01.520829  257176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:30:01.530564  257176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:30:01.530598  257176 start.go:495] detecting cgroup driver to use...
	I0729 12:30:01.530671  257176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:30:01.547174  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:30:01.561910  257176 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:30:01.561979  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:30:01.585740  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:30:01.618564  257176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:30:01.783506  257176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:30:01.940620  257176 docker.go:233] disabling docker service ...
	I0729 12:30:01.940698  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:30:01.959815  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:30:01.974713  257176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:30:02.128949  257176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:30:02.297303  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:30:02.311979  257176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:30:02.332382  257176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:30:02.332459  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.344118  257176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:30:02.344185  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.355791  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.367033  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.377875  257176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:30:02.389970  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.401378  257176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.413069  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.423934  257176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:30:02.433485  257176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:30:02.443209  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:02.597078  257176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:30:06.946792  257176 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.349677004s)
	I0729 12:30:06.946822  257176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:30:06.946866  257176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:30:06.951885  257176 start.go:563] Will wait 60s for crictl version
	I0729 12:30:06.951947  257176 ssh_runner.go:195] Run: which crictl
	I0729 12:30:06.955891  257176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:30:06.996933  257176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:30:06.997009  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.029517  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.067863  257176 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:30:07.069386  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:07.072261  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072653  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:07.072677  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072963  257176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:30:07.077985  257176 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:30:07.078159  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:30:07.078210  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.131360  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.131380  257176 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:30:07.131434  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.166976  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.167006  257176 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:30:07.167019  257176 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:30:07.167163  257176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:30:07.167263  257176 ssh_runner.go:195] Run: crio config
	I0729 12:30:07.218394  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:30:07.218416  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:30:07.218425  257176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:30:07.218446  257176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:30:07.218636  257176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:30:07.218660  257176 kube-vip.go:115] generating kube-vip config ...
	I0729 12:30:07.218715  257176 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:30:07.231281  257176 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:30:07.231382  257176 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 12:30:07.231469  257176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:30:07.241143  257176 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:30:07.241203  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:30:07.251296  257176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:30:07.268752  257176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:30:07.286269  257176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:30:07.306290  257176 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:30:07.325270  257176 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:30:07.330227  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:07.480445  257176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:30:07.495284  257176 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:30:07.495312  257176 certs.go:194] generating shared ca certs ...
	I0729 12:30:07.495334  257176 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.495514  257176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:30:07.495585  257176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:30:07.495600  257176 certs.go:256] generating profile certs ...
	I0729 12:30:07.495692  257176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:30:07.495719  257176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:30:07.495734  257176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.45 192.168.39.210 192.168.39.254]
	I0729 12:30:07.554302  257176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 ...
	I0729 12:30:07.554335  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293: {Name:mkc55706e98723442a7209c78a851c6aeec63640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554502  257176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 ...
	I0729 12:30:07.554512  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293: {Name:mkd6b648aa8c639f0f8174c6258aa3c28a419e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554579  257176 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt
	I0729 12:30:07.554733  257176 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key
	I0729 12:30:07.554863  257176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:30:07.554878  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:30:07.554890  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:30:07.554905  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:30:07.554917  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:30:07.554930  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:30:07.554942  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:30:07.554954  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:30:07.554966  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:30:07.555012  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:30:07.555038  257176 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:30:07.555053  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:30:07.555074  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:30:07.555094  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:30:07.555113  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:30:07.555149  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:07.555175  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:30:07.555188  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:07.555200  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:30:07.555742  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:30:07.581960  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:30:07.606534  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:30:07.651322  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:30:07.734079  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:30:07.843422  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:30:07.919383  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:30:08.009302  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:30:08.114819  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:30:08.177084  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:30:08.323565  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:30:08.418339  257176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:30:08.452890  257176 ssh_runner.go:195] Run: openssl version
	I0729 12:30:08.463083  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:30:08.481125  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488340  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488407  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.496532  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:30:08.512456  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:30:08.528227  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.535939  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.536020  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.542124  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:30:08.556827  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:30:08.570963  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578024  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578072  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.583957  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:30:08.599010  257176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:30:08.609458  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:30:08.622965  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:30:08.645142  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:30:08.661889  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:30:08.733013  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:30:08.752828  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 12:30:08.763265  257176 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:30:08.763447  257176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:30:08.763516  257176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:30:08.826291  257176 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:30:08.826316  257176 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:30:08.826319  257176 cri.go:89] found id: "f39e050cd5cc4b05a81e93b2261e728d2c07bc7c1daa3162edfde11e82a4620c"
	I0729 12:30:08.826323  257176 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:30:08.826325  257176 cri.go:89] found id: "a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27"
	I0729 12:30:08.826329  257176 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:30:08.826331  257176 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:30:08.826334  257176 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:30:08.826336  257176 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:30:08.826341  257176 cri.go:89] found id: "14bf682e420cb00f83e39a018ac3723f16ed71fccee45180d30073e87b224475"
	I0729 12:30:08.826343  257176 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:30:08.826345  257176 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:30:08.826348  257176 cri.go:89] found id: "d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91"
	I0729 12:30:08.826351  257176 cri.go:89] found id: "70136b17c65dd39a4d8ff8ecf6e4c4229432e46ce9fcbae7271cb05229ee641d"
	I0729 12:30:08.826356  257176 cri.go:89] found id: ""
	I0729 12:30:08.826397  257176 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.360579643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256675360550406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90299ad3-85e9-46fa-bcfe-c3082bf9c733 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.361263306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8725d32-567f-4981-95cb-4c9cf7ee2b5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.361320486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8725d32-567f-4981-95cb-4c9cf7ee2b5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.362006064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8725d32-567f-4981-95cb-4c9cf7ee2b5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.410114467Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=982dc20e-1cfe-4626-8b8a-e7408d45408b name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.410190495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=982dc20e-1cfe-4626-8b8a-e7408d45408b name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.411715734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a04f2968-6aff-4d6e-b555-3e58601db00a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.412219089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256675412194671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a04f2968-6aff-4d6e-b555-3e58601db00a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.412903137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46ac6d0e-e0e4-4f71-820e-ce4b96d22f52 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.412967981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46ac6d0e-e0e4-4f71-820e-ce4b96d22f52 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.413376053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46ac6d0e-e0e4-4f71-820e-ce4b96d22f52 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.460976738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b311324f-850b-4e3b-9bf7-daec86dece2a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.461070057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b311324f-850b-4e3b-9bf7-daec86dece2a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.462428230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0112240-5f9d-4f4a-9cfe-dd4aaf699d9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.463226078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256675463188533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0112240-5f9d-4f4a-9cfe-dd4aaf699d9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.463964360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=324761c9-d2ff-45cc-9ba3-e0f5f66a6806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.464043137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=324761c9-d2ff-45cc-9ba3-e0f5f66a6806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.464583607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=324761c9-d2ff-45cc-9ba3-e0f5f66a6806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.507559986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09121da9-4759-41c4-a119-ab138ee31b51 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.507636645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09121da9-4759-41c4-a119-ab138ee31b51 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.508564723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fcaefd6f-0094-4673-869b-c332da3590a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.509062323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256675509036333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcaefd6f-0094-4673-869b-c332da3590a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.509546166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e60ef52d-7e3c-477a-b4f6-7f99483a16e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.509631565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e60ef52d-7e3c-477a-b4f6-7f99483a16e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:37:55 ha-767488 crio[3370]: time="2024-07-29 12:37:55.510133438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash:
e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722256210732781099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-7674
88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Anno
tations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annota
tions:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27,PodSandboxId:fa4b77fe094c4286533680c3765568c49f1eeaab19e1a8511beef26cbaf7c0df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255693574128866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container
.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernet
es.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858
c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91,PodSandboxId:d3dacdbbe9ee46f6e663d697bafb99d730a5fd1edd6498faf5556770c980b632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da5
56f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255657393910994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_EXITED,CreatedAt:1722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e60ef52d-7e3c-477a-b4f6-7f99483a16e5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66eeaa3de5dde       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Running             kube-controller-manager   4                   77ae4bca5cb19       kube-controller-manager-ha-767488
	149dfcffe55a7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Exited              kube-controller-manager   3                   77ae4bca5cb19       kube-controller-manager-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      7 minutes ago       Running             busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      7 minutes ago       Running             busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	7ffae0e726786       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      7 minutes ago       Running             kube-vip                  0                   4ac1d50b066bb       kube-vip-ha-767488
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	76b855b3ad75b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       1                   3b6ba7ca06eb5       storage-provisioner
	547d6699a30a2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            1                   874397bc99826       kube-apiserver-ha-767488
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      7 minutes ago       Running             kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      1                   c38a2d43be153       etcd-ha-767488
	79b136a6e0ea0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   93f5e8a8985f2       busybox-fc5497c4f-trgfp
	3f7b67549d5f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   841baabfcb1b9       busybox-fc5497c4f-4ppv4
	a26ce9fba519a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Exited              storage-provisioner       0                   fa4b77fe094c4       storage-provisioner
	c263b16acab21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   c2f0a3db73b36       coredns-7db6d8ff4d-k6r5l
	ed92faf8d1c93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   721397f12db8c       coredns-7db6d8ff4d-qqt5t
	e2114078a73c1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   a4aeb6b1329f7       kindnet-6x56p
	a99c50ffbfb28       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   7ba65a0686e20       kube-proxy-sqk96
	f1ea8fbc1b3ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   f96272e7bee5b       kube-scheduler-ha-767488
	dab08a0e0f3c1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   489cc61ac2d59       etcd-ha-767488
	d427719357ecf       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      16 minutes ago      Exited              kube-apiserver            0                   d3dacdbbe9ee4       kube-apiserver-ha-767488
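	
	For reference, a status table like the one above can be regenerated directly on the node; this is a minimal sketch (not part of the captured output), assuming the CRI-O defaults these minikube VMs use and the profile name shown in the logs (ha-767488):
	
	    minikube ssh -p ha-767488 -- sudo crictl ps -a    # list all containers, running and exited, as reported by CRI-O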
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42790 - 48270 "HINFO IN 5378893488737017947.5532814832189282968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010592186s
	
	
	==> coredns [c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d] <==
	[INFO] 10.244.0.5:56893 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.006105477s
	[INFO] 10.244.2.2:49553 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000142409s
	[INFO] 10.244.0.4:44644 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190983s
	[INFO] 10.244.0.4:50509 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000060342s
	[INFO] 10.244.0.4:50667 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000235509s
	[INFO] 10.244.0.4:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002136398s
	[INFO] 10.244.0.5:59842 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008887594s
	[INFO] 10.244.0.5:50358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114137s
	[INFO] 10.244.2.2:51452 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000291176s
	[INFO] 10.244.2.2:40431 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000264874s
	[INFO] 10.244.2.2:35432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167749s
	[INFO] 10.244.0.4:53618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068812s
	[INFO] 10.244.0.4:52172 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001711814s
	[INFO] 10.244.0.4:47059 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131595s
	[INFO] 10.244.0.4:39902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147006s
	[INFO] 10.244.0.4:37624 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173908s
	[INFO] 10.244.0.5:52999 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135418s
	[INFO] 10.244.2.2:39192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096392s
	[INFO] 10.244.2.2:47682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102837s
	[INFO] 10.244.0.4:43135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079564s
	[INFO] 10.244.0.4:54022 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000229955s
	[INFO] 10.244.0.4:49468 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000035685s
	[INFO] 10.244.0.4:56523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000031196s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36411 - 11278 "HINFO IN 1809215905934978785.4639219165358094612. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008507012s
	
	
	==> coredns [ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0] <==
	[INFO] 10.244.2.2:52575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121686s
	[INFO] 10.244.2.2:60306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096516s
	[INFO] 10.244.2.2:56750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201569s
	[INFO] 10.244.0.4:53864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076226s
	[INFO] 10.244.0.4:43895 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007603s
	[INFO] 10.244.0.4:49768 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001191618s
	[INFO] 10.244.0.4:36610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091068s
	[INFO] 10.244.0.5:36533 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152157s
	[INFO] 10.244.0.5:59316 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006399s
	[INFO] 10.244.0.5:59406 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051375s
	[INFO] 10.244.0.5:56054 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054289s
	[INFO] 10.244.2.2:32902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000349565s
	[INFO] 10.244.2.2:56936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214735s
	[INFO] 10.244.2.2:38037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076517s
	[INFO] 10.244.2.2:33788 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066283s
	[INFO] 10.244.0.4:46469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080696s
	[INFO] 10.244.0.4:56376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069276s
	[INFO] 10.244.0.4:41139 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003161s
	[INFO] 10.244.0.5:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123194s
	[INFO] 10.244.0.5:43997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184384s
	[INFO] 10.244.0.5:34612 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094985s
	[INFO] 10.244.2.2:57694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131654s
	[INFO] 10.244.2.2:52834 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009944s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-767488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:37:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:37:03 +0000   Mon, 29 Jul 2024 12:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-767488
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4910accb98434efca56ff8b39068800c
	  System UUID:                4910accb-9843-4efc-a56f-f8b39068800c
	  Boot ID:                    f538ab8c-89b7-40ce-b82e-7644a867ee15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4ppv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     busybox-fc5497c4f-trgfp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-k6r5l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-qqt5t             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-767488                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-6x56p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-767488             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-767488    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-sqk96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-767488             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-767488                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m53s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-767488 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Warning  ContainerGCFailed        7m49s (x2 over 8m49s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m32s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	
	
	Name:               ha-767488-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:22:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:37:02 +0000   Mon, 29 Jul 2024 12:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-767488-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9a3fe2d6456464f8574d4c1d95e4f21
	  System UUID:                d9a3fe2d-6456-464f-8574-d4c1d95e4f21
	  Boot ID:                    9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jjx77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 etcd-ha-767488-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-l7jpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-767488-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-767488-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-d9lg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-767488-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-767488-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 6m                 kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 12m                kubelet          Node ha-767488-m02 has been rebooted, boot id: 9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Normal   RegisteredNode           11m                node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  ContainerGCFailed        7m1s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m33s              node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	
	
	Name:               ha-767488-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:26:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 12:24:43 +0000   Mon, 29 Jul 2024 12:27:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-767488-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ca168b2de41451a82ff59b787c535ad
	  System UUID:                5ca168b2-de41-451a-82ff-59b787c535ad
	  Boot ID:                    8622bb8b-ae59-41f0-afa6-05666f1768af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q6fnx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-767488-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-bz9pp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-767488-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-767488-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tzj27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-767488-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-767488-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  RegisteredNode           14m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal  NodeNotReady             10m                node-controller  Node ha-767488-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           5m33s              node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	
	
	Name:               ha-767488-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:26:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 12:25:21 +0000   Mon, 29 Jul 2024 12:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-767488-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 326a5fb51aae42b7b8056fc3c9e53faf
	  System UUID:                326a5fb5-1aae-42b7-b805-6fc3c9e53faf
	  Boot ID:                    a14ff095-d004-41b8-991d-b6ed30b10920
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bgb2n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-2m5gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal  NodeNotReady             10m                node-controller  Node ha-767488-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           5m33s              node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	
	
	==> dmesg <==
	[  +0.059198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062192] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.161618] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.141343] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.277698] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.104100] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.675894] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060211] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 12:21] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.541880] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"warn","ts":"2024-07-29T12:37:55.61769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.639946Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.914192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.917176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.928172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.937618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.947704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.952675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.956487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.970911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:55.982712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.007907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.014488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.01728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.018973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.030749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.043256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.052128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.056678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.060177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.065512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.073635Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.092175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.114424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T12:37:56.118906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> etcd [dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a] <==
	{"level":"info","ts":"2024-07-29T12:28:30.36201Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"a09c9983ac28f1fd","old-leader-member-id":"a09c9983ac28f1fd","new-leader-member-id":"30f76e47e42605a5","took":"101.152061ms"}
	{"level":"info","ts":"2024-07-29T12:28:30.362301Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362459Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362504Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362589Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362622Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362871Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363054Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363107Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"30f76e47e42605a5","error":"failed to read 30f76e47e42605a5 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T12:28:30.363176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363422Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T12:28:30.363566Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363616Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363657Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.3637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.363751Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364622Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364716Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364883Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.370716Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"warn","ts":"2024-07-29T12:28:30.370988Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55490","server-name":"","error":"read tcp 192.168.39.217:2380->192.168.39.45:55490: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:28:30.371604Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55480","server-name":"","error":"set tcp 192.168.39.217:2380: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:28:31.371639Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:28:31.37168Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> kernel <==
	 12:37:56 up 17 min,  0 users,  load average: 0.29, 0.34, 0.27
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:37:19.360506       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:29.352129       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:29.352281       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:29.352464       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:29.352509       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:29.352585       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:29.352605       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:29.352682       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:29.352702       1 main.go:299] handling current node
	I0729 12:37:39.352575       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:39.352764       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:39.352991       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:39.353023       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:37:39.353137       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:39.353160       1 main.go:299] handling current node
	I0729 12:37:39.353205       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:39.353234       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:49.361413       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:37:49.361470       1 main.go:299] handling current node
	I0729 12:37:49.361484       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:37:49.361489       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:37:49.361636       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:37:49.361763       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:37:49.361901       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:37:49.361925       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316] <==
	I0729 12:27:52.596723       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.599761       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:02.599920       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.600088       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:02.600113       1 main.go:299] handling current node
	I0729 12:28:02.600135       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:02.600152       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:02.600239       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:02.600258       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602416       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:12.602457       1 main.go:299] handling current node
	I0729 12:28:12.602474       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:12.602503       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:12.602642       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:12.602666       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602727       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:12.602746       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:22.595743       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:22.595784       1 main.go:299] handling current node
	I0729 12:28:22.595836       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:22.595843       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:22.596051       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:22.596107       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:22.596246       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:22.596285       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3] <==
	Trace[2068681805]: ---"Objects listed" error:etcdserver: request timed out 13010ms (12:31:52.763)
	Trace[2068681805]: [13.010453158s] [13.010453158s] END
	E0729 12:31:52.763880       1 cacher.go:475] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0729 12:31:52.763899       1 reflector.go:547] storage/cacher.go:/leases: failed to list *coordination.Lease: etcdserver: request timed out
	I0729 12:31:52.763927       1 trace.go:236] Trace[967270758]: "Reflector ListAndWatch" name:storage/cacher.go:/leases (29-Jul-2024 12:31:39.759) (total time: 13004ms):
	Trace[967270758]: ---"Objects listed" error:etcdserver: request timed out 13004ms (12:31:52.763)
	Trace[967270758]: [13.004565083s] [13.004565083s] END
	E0729 12:31:52.763936       1 cacher.go:475] cacher (leases.coordination.k8s.io): unexpected ListAndWatch error: failed to list *coordination.Lease: etcdserver: request timed out; reinitializing...
	W0729 12:31:52.763958       1 reflector.go:547] storage/cacher.go:/csidrivers: failed to list *storage.CSIDriver: etcdserver: request timed out
	I0729 12:31:52.763994       1 trace.go:236] Trace[710917587]: "Reflector ListAndWatch" name:storage/cacher.go:/csidrivers (29-Jul-2024 12:31:39.753) (total time: 13010ms):
	Trace[710917587]: ---"Objects listed" error:etcdserver: request timed out 13010ms (12:31:52.763)
	Trace[710917587]: [13.010204143s] [13.010204143s] END
	E0729 12:31:52.764016       1 cacher.go:475] cacher (csidrivers.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.CSIDriver: etcdserver: request timed out; reinitializing...
	E0729 12:31:53.729170       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	I0729 12:31:53.729368       1 trace.go:236] Trace[902411417]: "Get" accept:application/json, */*,audit-id:fa59e21f-4667-469a-816a-73d1af07e054,client:192.168.39.1,api-group:,api-version:v1,name:ha-767488-m02,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-767488-m02,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Jul-2024 12:31:46.608) (total time: 7120ms):
	Trace[902411417]: [7.120424258s] [7.120424258s] END
	E0729 12:31:53.729518       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	E0729 12:31:53.729587       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}: etcdserver: leader changed
	I0729 12:31:53.731004       1 trace.go:236] Trace[967601293]: "Get" accept:application/json, */*,audit-id:e6e55dfb-efc1-46e2-8f8e-bb982027ae68,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Jul-2024 12:31:47.943) (total time: 5787ms):
	Trace[967601293]: [5.787057446s] [5.787057446s] END
	I0729 12:31:53.731377       1 trace.go:236] Trace[1545811589]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:392ea8d0-13cd-4c24-b7ae-e13a5045beef,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:system:controller:persistent-volume-binder,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:resource,url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:GET (29-Jul-2024 12:31:45.753) (total time: 7977ms):
	Trace[1545811589]: [7.977997758s] [7.977997758s] END
	E0729 12:31:53.731759       1 storage_rbac.go:232] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder: etcdserver: leader changed
	W0729 12:31:54.518380       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.45]
	W0729 12:32:14.519919       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	
	
	==> kube-apiserver [d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91] <==
	I0729 12:28:30.187767       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.188600       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:28:30.189914       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:28:30.189977       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0729 12:28:30.190012       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0729 12:28:30.192079       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192120       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192128       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.192141       1 controller.go:176] quota evaluator worker shutdown
	I0729 12:28:30.194141       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 12:28:30.194189       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 12:28:30.198902       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 12:28:30.204074       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 12:28:30.205442       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 12:28:30.213471       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.213575       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.213659       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214057       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214205       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214277       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214426       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214629       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214728       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214894       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:28:30.214965       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f] <==
	I0729 12:31:21.148099       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:31:21.413326       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:31:21.413367       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:31:21.414936       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:31:21.415020       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:31:21.415139       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:31:21.415338       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 12:31:31.426553       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b] <==
	I0729 12:32:23.322258       1 shared_informer.go:320] Caches are synced for namespace
	I0729 12:32:23.375869       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:32:23.381302       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:32:23.385073       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:32:23.407529       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:32:23.421636       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 12:32:23.439354       1 shared_informer.go:320] Caches are synced for taint
	I0729 12:32:23.439677       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 12:32:23.484698       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488"
	I0729 12:32:23.484766       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m03"
	I0729 12:32:23.484846       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m04"
	I0729 12:32:23.484966       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m02"
	I0729 12:32:23.485114       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:32:23.525676       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 12:32:23.930267       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:32:23.955995       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:32:23.956031       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:37:23.555782       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-q6fnx"
	I0729 12:37:23.577937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.269µs"
	I0729 12:37:23.632551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.059928ms"
	I0729 12:37:23.644732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.12292ms"
	I0729 12:37:23.645012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.899µs"
	I0729 12:37:23.654010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.855µs"
	I0729 12:37:28.528253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.910657ms"
	I0729 12:37:28.528967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.66µs"
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	W0729 12:31:17.310464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311579       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311601       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:17.311772       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:17.311782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:17.311979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:26.526651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:26.526903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:29.599349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:29.599511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:29.599615       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 12:31:41.887061       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:41.886328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:41.887293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:31:54.176621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.176873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.177234       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:31:54.179432       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:31:54.179489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:32:20.696876       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:32:29.397880       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:32:37.296532       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1] <==
	I0729 12:21:17.255773       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:21:17.286934       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0729 12:21:17.337677       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:21:17.337727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:21:17.337746       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:21:17.340517       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:21:17.340710       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:21:17.340741       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:21:17.342294       1 config.go:192] "Starting service config controller"
	I0729 12:21:17.342534       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:21:17.342581       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:21:17.342586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:21:17.343590       1 config.go:319] "Starting node config controller"
	I0729 12:21:17.343624       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:21:17.443485       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:21:17.443697       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:21:17.443586       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	W0729 12:30:12.498631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:30:12.498658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:30:12.498717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:30:12.498744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 12:30:12.498839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:30:12.498867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:30:12.498916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:30:12.498942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:30:12.499009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:30:12.499036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:30:12.499077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:30:12.499101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:30:12.499133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 12:30:12.499159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 12:30:12.711214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:30:12.711269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:30:12.901782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:30:12.901876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:30:13.940936       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:30:13.940995       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:30:14.206492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 12:30:14.206526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:30:14.291769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:30:14.291892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0729 12:30:18.034771       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb] <==
	E0729 12:21:00.957258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:00.966559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:21:00.966602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:21:00.969971       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:21:00.970006       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:21:00.975481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:21:00.975514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:21:00.991207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:21:00.991302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:21:01.043730       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:21:01.043771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:21:01.201334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.201433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.269111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:21:01.269202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:21:01.308519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:21:01.308567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:21:01.484192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.484242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.488207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:21:01.488410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 12:21:03.597444       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:24:50.794520       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bgb2n" node="ha-767488-m04"
	E0729 12:24:50.794710       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" pod="kube-system/kindnet-bgb2n"
	E0729 12:28:30.163371       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 12:33:06 ha-767488 kubelet[1381]: E0729 12:33:06.688996    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:33:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:33:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:34:06 ha-767488 kubelet[1381]: E0729 12:34:06.683963    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:34:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:34:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:35:06 ha-767488 kubelet[1381]: E0729 12:35:06.682665    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:35:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:35:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:36:06 ha-767488 kubelet[1381]: E0729 12:36:06.683971    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:36:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:36:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:37:06 ha-767488 kubelet[1381]: E0729 12:37:06.685220    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:37:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:37:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:37:55.065317  259632 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (174.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 stop -v=7 --alsologtostderr
E0729 12:39:27.880541  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-767488 stop -v=7 --alsologtostderr: exit status 82 (2m3.024144014s)

                                                
                                                
-- stdout --
	* Stopping node "ha-767488-m04"  ...
	* Stopping node "ha-767488-m03"  ...
	* Stopping node "ha-767488-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:37:57.236217  259713 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:37:57.236440  259713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:37:57.236448  259713 out.go:304] Setting ErrFile to fd 2...
	I0729 12:37:57.236453  259713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:37:57.236626  259713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:37:57.236895  259713 out.go:298] Setting JSON to false
	I0729 12:37:57.236979  259713 mustload.go:65] Loading cluster: ha-767488
	I0729 12:37:57.237305  259713 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:37:57.237413  259713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:37:57.237605  259713 mustload.go:65] Loading cluster: ha-767488
	I0729 12:37:57.237731  259713 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:37:57.237763  259713 stop.go:39] StopHost: ha-767488-m04
	I0729 12:37:57.238114  259713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:57.238155  259713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:57.253817  259713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0729 12:37:57.254300  259713 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:57.254862  259713 main.go:141] libmachine: Using API Version  1
	I0729 12:37:57.254888  259713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:57.255283  259713 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:57.258293  259713 out.go:177] * Stopping node "ha-767488-m04"  ...
	I0729 12:37:57.259650  259713 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 12:37:57.259693  259713 main.go:141] libmachine: (ha-767488-m04) Calling .DriverName
	I0729 12:37:57.259946  259713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 12:37:57.259970  259713 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHHostname
	I0729 12:37:57.261612  259713 retry.go:31] will retry after 374.858241ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:57.637181  259713 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHHostname
	I0729 12:37:57.638777  259713 retry.go:31] will retry after 333.16836ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:57.972243  259713 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHHostname
	I0729 12:37:57.973900  259713 retry.go:31] will retry after 642.673749ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:58.616697  259713 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHHostname
	W0729 12:37:58.618282  259713 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:58.618317  259713 main.go:141] libmachine: Stopping "ha-767488-m04"...
	I0729 12:37:58.618325  259713 main.go:141] libmachine: (ha-767488-m04) Calling .GetState
	I0729 12:37:58.619611  259713 stop.go:66] stop err: Machine "ha-767488-m04" is already stopped.
	I0729 12:37:58.619630  259713 stop.go:69] host is already stopped
	I0729 12:37:58.619643  259713 stop.go:39] StopHost: ha-767488-m03
	I0729 12:37:58.620013  259713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:58.620061  259713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:58.635380  259713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0729 12:37:58.635781  259713 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:58.636282  259713 main.go:141] libmachine: Using API Version  1
	I0729 12:37:58.636310  259713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:58.636623  259713 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:58.638579  259713 out.go:177] * Stopping node "ha-767488-m03"  ...
	I0729 12:37:58.639975  259713 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 12:37:58.640001  259713 main.go:141] libmachine: (ha-767488-m03) Calling .DriverName
	I0729 12:37:58.640218  259713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 12:37:58.640238  259713 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHHostname
	I0729 12:37:58.641832  259713 retry.go:31] will retry after 287.341928ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:58.929249  259713 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHHostname
	I0729 12:37:58.930885  259713 retry.go:31] will retry after 232.603078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:59.164339  259713 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHHostname
	I0729 12:37:59.166031  259713 retry.go:31] will retry after 636.796038ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:59.803449  259713 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHHostname
	W0729 12:37:59.805306  259713 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0729 12:37:59.805341  259713 main.go:141] libmachine: Stopping "ha-767488-m03"...
	I0729 12:37:59.805349  259713 main.go:141] libmachine: (ha-767488-m03) Calling .GetState
	I0729 12:37:59.806539  259713 stop.go:66] stop err: Machine "ha-767488-m03" is already stopped.
	I0729 12:37:59.806560  259713 stop.go:69] host is already stopped
	I0729 12:37:59.806571  259713 stop.go:39] StopHost: ha-767488-m02
	I0729 12:37:59.806861  259713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:37:59.806899  259713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:37:59.823248  259713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34837
	I0729 12:37:59.823726  259713 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:37:59.824249  259713 main.go:141] libmachine: Using API Version  1
	I0729 12:37:59.824272  259713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:37:59.824614  259713 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:37:59.826566  259713 out.go:177] * Stopping node "ha-767488-m02"  ...
	I0729 12:37:59.827749  259713 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 12:37:59.827770  259713 main.go:141] libmachine: (ha-767488-m02) Calling .DriverName
	I0729 12:37:59.828006  259713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 12:37:59.828033  259713 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHHostname
	I0729 12:37:59.830776  259713 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:37:59.831189  259713 main.go:141] libmachine: (ha-767488-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:48:8f", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:25:48 +0000 UTC Type:0 Mac:52:54:00:2e:48:8f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-767488-m02 Clientid:01:52:54:00:2e:48:8f}
	I0729 12:37:59.831217  259713 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined IP address 192.168.39.45 and MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:37:59.831367  259713 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHPort
	I0729 12:37:59.831550  259713 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHKeyPath
	I0729 12:37:59.831730  259713 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHUsername
	I0729 12:37:59.831869  259713 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488-m02/id_rsa Username:docker}
	I0729 12:37:59.917096  259713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 12:37:59.970770  259713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 12:38:00.024528  259713 main.go:141] libmachine: Stopping "ha-767488-m02"...
	I0729 12:38:00.024559  259713 main.go:141] libmachine: (ha-767488-m02) Calling .GetState
	I0729 12:38:00.026169  259713 main.go:141] libmachine: (ha-767488-m02) Calling .Stop
	I0729 12:38:00.029500  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 0/120
	I0729 12:38:01.031553  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 1/120
	I0729 12:38:02.033116  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 2/120
	I0729 12:38:03.034438  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 3/120
	I0729 12:38:04.035999  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 4/120
	I0729 12:38:05.037787  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 5/120
	I0729 12:38:06.039297  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 6/120
	I0729 12:38:07.040821  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 7/120
	I0729 12:38:08.042191  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 8/120
	I0729 12:38:09.043526  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 9/120
	I0729 12:38:10.045347  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 10/120
	I0729 12:38:11.047087  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 11/120
	I0729 12:38:12.048487  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 12/120
	I0729 12:38:13.049889  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 13/120
	I0729 12:38:14.051196  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 14/120
	I0729 12:38:15.053133  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 15/120
	I0729 12:38:16.055114  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 16/120
	I0729 12:38:17.056655  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 17/120
	I0729 12:38:18.057993  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 18/120
	I0729 12:38:19.059452  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 19/120
	I0729 12:38:20.061360  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 20/120
	I0729 12:38:21.063123  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 21/120
	I0729 12:38:22.064454  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 22/120
	I0729 12:38:23.065691  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 23/120
	I0729 12:38:24.067064  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 24/120
	I0729 12:38:25.068754  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 25/120
	I0729 12:38:26.070498  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 26/120
	I0729 12:38:27.071908  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 27/120
	I0729 12:38:28.073427  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 28/120
	I0729 12:38:29.074868  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 29/120
	I0729 12:38:30.076548  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 30/120
	I0729 12:38:31.078744  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 31/120
	I0729 12:38:32.080405  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 32/120
	I0729 12:38:33.081881  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 33/120
	I0729 12:38:34.083108  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 34/120
	I0729 12:38:35.085008  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 35/120
	I0729 12:38:36.087335  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 36/120
	I0729 12:38:37.088611  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 37/120
	I0729 12:38:38.089858  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 38/120
	I0729 12:38:39.091055  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 39/120
	I0729 12:38:40.092723  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 40/120
	I0729 12:38:41.093926  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 41/120
	I0729 12:38:42.095128  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 42/120
	I0729 12:38:43.096589  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 43/120
	I0729 12:38:44.097943  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 44/120
	I0729 12:38:45.100246  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 45/120
	I0729 12:38:46.101484  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 46/120
	I0729 12:38:47.102742  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 47/120
	I0729 12:38:48.104037  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 48/120
	I0729 12:38:49.105511  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 49/120
	I0729 12:38:50.106851  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 50/120
	I0729 12:38:51.108120  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 51/120
	I0729 12:38:52.109464  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 52/120
	I0729 12:38:53.111464  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 53/120
	I0729 12:38:54.112719  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 54/120
	I0729 12:38:55.114344  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 55/120
	I0729 12:38:56.115591  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 56/120
	I0729 12:38:57.116818  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 57/120
	I0729 12:38:58.118103  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 58/120
	I0729 12:38:59.119448  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 59/120
	I0729 12:39:00.121193  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 60/120
	I0729 12:39:01.123164  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 61/120
	I0729 12:39:02.124547  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 62/120
	I0729 12:39:03.125925  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 63/120
	I0729 12:39:04.127163  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 64/120
	I0729 12:39:05.128896  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 65/120
	I0729 12:39:06.130132  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 66/120
	I0729 12:39:07.131361  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 67/120
	I0729 12:39:08.132692  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 68/120
	I0729 12:39:09.134042  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 69/120
	I0729 12:39:10.135769  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 70/120
	I0729 12:39:11.137216  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 71/120
	I0729 12:39:12.138723  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 72/120
	I0729 12:39:13.140217  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 73/120
	I0729 12:39:14.141538  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 74/120
	I0729 12:39:15.143152  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 75/120
	I0729 12:39:16.144500  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 76/120
	I0729 12:39:17.145771  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 77/120
	I0729 12:39:18.147183  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 78/120
	I0729 12:39:19.148441  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 79/120
	I0729 12:39:20.150038  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 80/120
	I0729 12:39:21.151385  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 81/120
	I0729 12:39:22.152804  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 82/120
	I0729 12:39:23.154397  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 83/120
	I0729 12:39:24.156659  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 84/120
	I0729 12:39:25.158396  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 85/120
	I0729 12:39:26.159870  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 86/120
	I0729 12:39:27.161345  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 87/120
	I0729 12:39:28.162801  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 88/120
	I0729 12:39:29.164054  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 89/120
	I0729 12:39:30.165854  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 90/120
	I0729 12:39:31.167323  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 91/120
	I0729 12:39:32.168704  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 92/120
	I0729 12:39:33.170246  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 93/120
	I0729 12:39:34.171546  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 94/120
	I0729 12:39:35.173311  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 95/120
	I0729 12:39:36.174557  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 96/120
	I0729 12:39:37.175822  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 97/120
	I0729 12:39:38.177130  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 98/120
	I0729 12:39:39.178399  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 99/120
	I0729 12:39:40.180298  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 100/120
	I0729 12:39:41.181670  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 101/120
	I0729 12:39:42.182945  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 102/120
	I0729 12:39:43.184431  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 103/120
	I0729 12:39:44.185738  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 104/120
	I0729 12:39:45.187287  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 105/120
	I0729 12:39:46.188615  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 106/120
	I0729 12:39:47.189916  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 107/120
	I0729 12:39:48.191227  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 108/120
	I0729 12:39:49.192644  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 109/120
	I0729 12:39:50.194097  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 110/120
	I0729 12:39:51.195349  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 111/120
	I0729 12:39:52.196771  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 112/120
	I0729 12:39:53.198479  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 113/120
	I0729 12:39:54.199809  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 114/120
	I0729 12:39:55.202070  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 115/120
	I0729 12:39:56.203497  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 116/120
	I0729 12:39:57.205154  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 117/120
	I0729 12:39:58.207375  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 118/120
	I0729 12:39:59.208723  259713 main.go:141] libmachine: (ha-767488-m02) Waiting for machine to stop 119/120
	I0729 12:40:00.209494  259713 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 12:40:00.209543  259713 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 12:40:00.211512  259713 out.go:177] 
	W0729 12:40:00.212886  259713 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 12:40:00.212910  259713 out.go:239] * 
	* 
	W0729 12:40:00.215489  259713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 12:40:00.216775  259713 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-767488 stop -v=7 --alsologtostderr": exit status 82
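For reference, the stop failure above (GUEST_STOP_TIMEOUT after 120 one-second polls, surfaced to the harness as exit status 82) can be reproduced outside the test by running the same command and inspecting its exit code. This is a minimal sketch, not the harness's own helper; the binary path and profile name are taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test makes: out/minikube-linux-amd64 -p ha-767488 stop -v=7 --alsologtostderr
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-767488", "stop", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// In this report the stop timed out and returned exit status 82.
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Printf("stop failed with exit code %d\n", ee.ExitCode())
		} else {
			fmt.Printf("stop failed: %v\n", err)
		}
	}
}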
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr: exit status 7 (33.866090109s)

                                                
                                                
-- stdout --
	ha-767488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-767488-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-767488-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767488-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:40:00.262142  260138 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:40:00.262251  260138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:00.262256  260138 out.go:304] Setting ErrFile to fd 2...
	I0729 12:40:00.262266  260138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:00.262431  260138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:40:00.262615  260138 out.go:298] Setting JSON to false
	I0729 12:40:00.262643  260138 mustload.go:65] Loading cluster: ha-767488
	I0729 12:40:00.262699  260138 notify.go:220] Checking for updates...
	I0729 12:40:00.263021  260138 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:00.263035  260138 status.go:255] checking status of ha-767488 ...
	I0729 12:40:00.263396  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:00.263453  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:00.281634  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
	I0729 12:40:00.282162  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:00.282774  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:00.282795  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:00.283198  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:00.283444  260138 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:40:00.285162  260138 status.go:330] ha-767488 host status = "Running" (err=<nil>)
	I0729 12:40:00.285181  260138 host.go:66] Checking if "ha-767488" exists ...
	I0729 12:40:00.285594  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:00.285644  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:00.300285  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42049
	I0729 12:40:00.300771  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:00.301369  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:00.301396  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:00.301760  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:00.302033  260138 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:40:00.304711  260138 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:00.305111  260138 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:00.305141  260138 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:00.305277  260138 host.go:66] Checking if "ha-767488" exists ...
	I0729 12:40:00.305591  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:00.305635  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:00.320293  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0729 12:40:00.320778  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:00.321298  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:00.321320  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:00.321624  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:00.321831  260138 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:00.322051  260138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:40:00.322090  260138 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:00.324979  260138 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:00.325408  260138 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:00.325435  260138 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:00.325577  260138 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:00.325763  260138 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:00.325962  260138 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:00.326118  260138 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:40:00.413956  260138 ssh_runner.go:195] Run: systemctl --version
	I0729 12:40:00.424157  260138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:40:00.441826  260138 kubeconfig.go:125] found "ha-767488" server: "https://192.168.39.254:8443"
	I0729 12:40:00.441859  260138 api_server.go:166] Checking apiserver status ...
	I0729 12:40:00.441905  260138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:40:00.457662  260138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5983/cgroup
	W0729 12:40:00.467854  260138 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5983/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:40:00.467926  260138 ssh_runner.go:195] Run: ls
	I0729 12:40:00.472496  260138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:40:03.232716  260138 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:40:03.232839  260138 retry.go:31] will retry after 202.12301ms: state is "Stopped"
	I0729 12:40:03.435194  260138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:40:06.305843  260138 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:40:06.305898  260138 retry.go:31] will retry after 327.928803ms: state is "Stopped"
	I0729 12:40:06.634497  260138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:40:09.376944  260138 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:40:09.376994  260138 retry.go:31] will retry after 339.066056ms: state is "Stopped"
	I0729 12:40:09.716464  260138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:40:12.445158  260138 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:40:12.445207  260138 retry.go:31] will retry after 587.920118ms: state is "Stopped"
	I0729 12:40:13.033999  260138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:40:15.517186  260138 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:40:15.517245  260138 status.go:422] ha-767488 apiserver status = Running (err=<nil>)
	I0729 12:40:15.517255  260138 status.go:257] ha-767488 status: &{Name:ha-767488 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:40:15.517278  260138 status.go:255] checking status of ha-767488-m02 ...
	I0729 12:40:15.517723  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:15.517778  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:15.532470  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41777
	I0729 12:40:15.533011  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:15.533511  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:15.533533  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:15.533858  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:15.534064  260138 main.go:141] libmachine: (ha-767488-m02) Calling .GetState
	I0729 12:40:15.535523  260138 status.go:330] ha-767488-m02 host status = "Running" (err=<nil>)
	I0729 12:40:15.535569  260138 host.go:66] Checking if "ha-767488-m02" exists ...
	I0729 12:40:15.535885  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:15.535921  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:15.550774  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0729 12:40:15.551292  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:15.551809  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:15.551832  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:15.552195  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:15.552391  260138 main.go:141] libmachine: (ha-767488-m02) Calling .GetIP
	I0729 12:40:15.555411  260138 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:40:15.555856  260138 main.go:141] libmachine: (ha-767488-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:48:8f", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:25:48 +0000 UTC Type:0 Mac:52:54:00:2e:48:8f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-767488-m02 Clientid:01:52:54:00:2e:48:8f}
	I0729 12:40:15.555887  260138 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined IP address 192.168.39.45 and MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:40:15.556048  260138 host.go:66] Checking if "ha-767488-m02" exists ...
	I0729 12:40:15.556371  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:15.556408  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:15.571145  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0729 12:40:15.571560  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:15.572032  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:15.572060  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:15.572410  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:15.572575  260138 main.go:141] libmachine: (ha-767488-m02) Calling .DriverName
	I0729 12:40:15.572774  260138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:40:15.572811  260138 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHHostname
	I0729 12:40:15.575356  260138 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:40:15.575863  260138 main.go:141] libmachine: (ha-767488-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:48:8f", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:25:48 +0000 UTC Type:0 Mac:52:54:00:2e:48:8f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-767488-m02 Clientid:01:52:54:00:2e:48:8f}
	I0729 12:40:15.575891  260138 main.go:141] libmachine: (ha-767488-m02) DBG | domain ha-767488-m02 has defined IP address 192.168.39.45 and MAC address 52:54:00:2e:48:8f in network mk-ha-767488
	I0729 12:40:15.576127  260138 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHPort
	I0729 12:40:15.576303  260138 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHKeyPath
	I0729 12:40:15.576455  260138 main.go:141] libmachine: (ha-767488-m02) Calling .GetSSHUsername
	I0729 12:40:15.576596  260138 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488-m02/id_rsa Username:docker}
	W0729 12:40:34.049006  260138 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.45:22: connect: no route to host
	W0729 12:40:34.049137  260138 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	E0729 12:40:34.049167  260138 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	I0729 12:40:34.049176  260138 status.go:257] ha-767488-m02 status: &{Name:ha-767488-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 12:40:34.049196  260138 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	I0729 12:40:34.049210  260138 status.go:255] checking status of ha-767488-m03 ...
	I0729 12:40:34.049637  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:34.049700  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:34.064877  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0729 12:40:34.065274  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:34.065729  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:34.065750  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:34.066039  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:34.066239  260138 main.go:141] libmachine: (ha-767488-m03) Calling .GetState
	I0729 12:40:34.067663  260138 status.go:330] ha-767488-m03 host status = "Stopped" (err=<nil>)
	I0729 12:40:34.067675  260138 status.go:343] host is not running, skipping remaining checks
	I0729 12:40:34.067680  260138 status.go:257] ha-767488-m03 status: &{Name:ha-767488-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:40:34.067704  260138 status.go:255] checking status of ha-767488-m04 ...
	I0729 12:40:34.068010  260138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:34.068048  260138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:34.082121  260138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41697
	I0729 12:40:34.082483  260138 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:34.082901  260138 main.go:141] libmachine: Using API Version  1
	I0729 12:40:34.082918  260138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:34.083239  260138 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:34.083431  260138 main.go:141] libmachine: (ha-767488-m04) Calling .GetState
	I0729 12:40:34.084719  260138 status.go:330] ha-767488-m04 host status = "Stopped" (err=<nil>)
	I0729 12:40:34.084736  260138 status.go:343] host is not running, skipping remaining checks
	I0729 12:40:34.084742  260138 status.go:257] ha-767488-m04 status: &{Name:ha-767488-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-767488-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-767488-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-767488-m04
type: Worker
host: Stopped
kubelet: Stopped
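The three assertions above (ha_test.go:543, 546, 549) all scrape the same status text: after a cluster-wide stop they expect no running hosts and all kubelets reported as Stopped, which the m01/m02 entries here violate. A minimal sketch of an equivalent check over the human-readable status output shown above (this is illustrative and not the test's actual parser; binary path and profile name are taken from this run):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same status probe the test runs; a non-zero exit is expected while hosts are down,
	// so the error is deliberately ignored and only the captured text is inspected.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-767488", "status")
	out, _ := cmd.CombinedOutput()

	var stoppedKubelets, runningHosts int
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		switch strings.TrimSpace(sc.Text()) {
		case "kubelet: Stopped":
			stoppedKubelets++
		case "host: Running":
			runningHosts++
		}
	}
	// After a successful stop of this 3 control-plane + 1 worker cluster, the test
	// expects zero running hosts and every node's kubelet reported as Stopped.
	fmt.Printf("stopped kubelets: %d, running hosts: %d\n", stoppedKubelets, runningHosts)
}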

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488: exit status 2 (15.575418202s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
E0729 12:40:50.926396  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.357822216s)
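Beyond the last 25 log lines dumped below, the failure box earlier in this output asks for full logs collected via `minikube logs --file=logs.txt`. A minimal sketch of that collection step, assuming the same binary path and profile as this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Writes the complete cluster logs to logs.txt, as suggested by the GUEST_STOP_TIMEOUT advice box.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-767488", "logs", "--file=logs.txt")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("log collection failed: %v\n%s", err, out)
		return
	}
	fmt.Println("wrote logs.txt")
}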
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	| node    | ha-767488 node delete m03 -v=7                                                   | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-767488 stop -v=7                                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:28:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:28:29.213184  257176 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:28:29.213435  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213444  257176 out.go:304] Setting ErrFile to fd 2...
	I0729 12:28:29.213448  257176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:28:29.213604  257176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:28:29.214122  257176 out.go:298] Setting JSON to false
	I0729 12:28:29.215063  257176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7852,"bootTime":1722248257,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:28:29.215118  257176 start.go:139] virtualization: kvm guest
	I0729 12:28:29.217142  257176 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:28:29.218351  257176 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:28:29.218358  257176 notify.go:220] Checking for updates...
	I0729 12:28:29.220405  257176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:28:29.221684  257176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:28:29.222900  257176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:28:29.224025  257176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:28:29.225157  257176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:28:29.226709  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:29.226808  257176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:28:29.227211  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.227254  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.242929  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0729 12:28:29.243340  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.243859  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.243878  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.244194  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.244404  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.277920  257176 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:28:29.279142  257176 start.go:297] selected driver: kvm2
	I0729 12:28:29.279164  257176 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.279323  257176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:28:29.279655  257176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.279742  257176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:28:29.294785  257176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:28:29.295450  257176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:28:29.295597  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:28:29.295609  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:28:29.295668  257176 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:28:29.295787  257176 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:28:29.297555  257176 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:28:29.298735  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:28:29.298761  257176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:28:29.298770  257176 cache.go:56] Caching tarball of preloaded images
	I0729 12:28:29.298837  257176 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:28:29.298847  257176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:28:29.298958  257176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:28:29.299164  257176 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:28:29.299217  257176 start.go:364] duration metric: took 29.143µs to acquireMachinesLock for "ha-767488"
	I0729 12:28:29.299236  257176 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:28:29.299241  257176 fix.go:54] fixHost starting: 
	I0729 12:28:29.299513  257176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:28:29.299545  257176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:28:29.313514  257176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0729 12:28:29.313936  257176 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:28:29.314395  257176 main.go:141] libmachine: Using API Version  1
	I0729 12:28:29.314416  257176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:28:29.314828  257176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:28:29.315041  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.315199  257176 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:28:29.316538  257176 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:28:29.316562  257176 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:28:29.318256  257176 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:28:29.319254  257176 machine.go:94] provisionDockerMachine start ...
	I0729 12:28:29.319272  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:28:29.319461  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.321717  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322169  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.322198  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.322326  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.322496  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322637  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.322767  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.322944  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.323131  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.323141  257176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:28:29.438235  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.438263  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438523  257176 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:28:29.438557  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.438793  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.441520  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.441975  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.442000  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.442119  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.442319  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442466  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.442624  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.442834  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.443017  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.443028  257176 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:28:29.574562  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:28:29.574598  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.577319  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577768  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.577796  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.577984  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.578163  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578349  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.578522  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.578697  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:29.578860  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:29.578875  257176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:28:29.694293  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
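	The SSH snippet above ensures the guest's /etc/hosts maps 127.0.1.1 to the machine name before provisioning continues. A quick manual check on the node would look like this (an illustrative command, not part of this test run):
	    # Confirm the 127.0.1.1 entry written by the provisioner
	    grep -n '^127.0.1.1' /etc/hosts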
	I0729 12:28:29.694324  257176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:28:29.694371  257176 buildroot.go:174] setting up certificates
	I0729 12:28:29.694382  257176 provision.go:84] configureAuth start
	I0729 12:28:29.694404  257176 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:28:29.694702  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:28:29.697510  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.697893  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.697924  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.698075  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.700392  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700707  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.700736  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.700956  257176 provision.go:143] copyHostCerts
	I0729 12:28:29.700988  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701018  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:28:29.701026  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:28:29.701092  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:28:29.701180  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701196  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:28:29.701203  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:28:29.701232  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:28:29.701337  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701356  257176 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:28:29.701363  257176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:28:29.701386  257176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:28:29.701443  257176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
	I0729 12:28:29.865634  257176 provision.go:177] copyRemoteCerts
	I0729 12:28:29.865706  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:28:29.865737  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:29.868239  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868633  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:29.868668  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:29.868894  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:29.869091  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:29.869258  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:29.869404  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:28:29.954969  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:28:29.955070  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 12:28:29.983588  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:28:29.983664  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:28:30.008507  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:28:30.008564  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 12:28:30.033341  257176 provision.go:87] duration metric: took 338.942174ms to configureAuth
	I0729 12:28:30.033370  257176 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:28:30.033650  257176 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:28:30.033738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:28:30.036595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037005  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:28:30.037034  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:28:30.037194  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:28:30.037406  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037590  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:28:30.037757  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:28:30.037917  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:28:30.038088  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:28:30.038102  257176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:30:00.889607  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:30:00.889647  257176 machine.go:97] duration metric: took 1m31.570380134s to provisionDockerMachine
	I0729 12:30:00.889661  257176 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:30:00.889671  257176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:30:00.889688  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:00.890061  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:30:00.890101  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:00.893255  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893756  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:00.893776  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:00.893964  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:00.894195  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:00.894355  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:00.894488  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:00.985670  257176 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:30:00.990118  257176 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:30:00.990148  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:30:00.990216  257176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:30:00.990282  257176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:30:00.990293  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:30:00.990393  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:30:01.000194  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:01.026191  257176 start.go:296] duration metric: took 136.51077ms for postStartSetup
	I0729 12:30:01.026247  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.026593  257176 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:30:01.026621  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.029199  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029572  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.029595  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.029738  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.029944  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.030081  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.030227  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:30:01.115131  257176 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:30:01.115161  257176 fix.go:56] duration metric: took 1m31.815919439s for fixHost
	I0729 12:30:01.115184  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.117586  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.117880  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.117908  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.118141  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.118375  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118566  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.118718  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.118901  257176 main.go:141] libmachine: Using SSH client type: native
	I0729 12:30:01.119139  257176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:30:01.119158  257176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 12:30:01.229703  257176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256201.208888269
	
	I0729 12:30:01.229730  257176 fix.go:216] guest clock: 1722256201.208888269
	I0729 12:30:01.229740  257176 fix.go:229] Guest: 2024-07-29 12:30:01.208888269 +0000 UTC Remote: 2024-07-29 12:30:01.115168505 +0000 UTC m=+91.939593395 (delta=93.719764ms)
	I0729 12:30:01.229788  257176 fix.go:200] guest clock delta is within tolerance: 93.719764ms
	I0729 12:30:01.229811  257176 start.go:83] releasing machines lock for "ha-767488", held for 1m31.930567231s
	I0729 12:30:01.229843  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.230107  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:01.232737  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233111  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.233145  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.233363  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.233889  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234111  257176 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:30:01.234230  257176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:30:01.234695  257176 ssh_runner.go:195] Run: cat /version.json
	I0729 12:30:01.234732  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.234779  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:30:01.238055  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238191  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238449  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238476  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238583  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238695  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:01.238714  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:01.238744  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.238859  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:30:01.238932  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239053  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:30:01.239125  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.239217  257176 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:30:01.239383  257176 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:30:01.342923  257176 ssh_runner.go:195] Run: systemctl --version
	I0729 12:30:01.349719  257176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:30:01.510709  257176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:30:01.520723  257176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:30:01.520829  257176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:30:01.530564  257176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:30:01.530598  257176 start.go:495] detecting cgroup driver to use...
	I0729 12:30:01.530671  257176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:30:01.547174  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:30:01.561910  257176 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:30:01.561979  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:30:01.585740  257176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:30:01.618564  257176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:30:01.783506  257176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:30:01.940620  257176 docker.go:233] disabling docker service ...
	I0729 12:30:01.940698  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:30:01.959815  257176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:30:01.974713  257176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:30:02.128949  257176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:30:02.297303  257176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:30:02.311979  257176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:30:02.332382  257176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:30:02.332459  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.344118  257176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:30:02.344185  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.355791  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.367033  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.377875  257176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:30:02.389970  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.401378  257176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.413069  257176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:30:02.423934  257176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:30:02.433485  257176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:30:02.443209  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:02.597078  257176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:30:06.946792  257176 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.349677004s)
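	The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart. A minimal way to confirm the result on the node is sketched below; the expected values are inferred from those sed expressions, not captured from this run:
	    # Show the keys touched by the provisioning step
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # Expected (assumed) output:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #     "net.ipv4.ip_unprivileged_port_start=0",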
	I0729 12:30:06.946822  257176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:30:06.946866  257176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:30:06.951885  257176 start.go:563] Will wait 60s for crictl version
	I0729 12:30:06.951947  257176 ssh_runner.go:195] Run: which crictl
	I0729 12:30:06.955891  257176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:30:06.996933  257176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
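	The same runtime information can be queried directly against the socket minikube waits for above, e.g. (illustrative; the endpoint matches the one written to /etc/crictl.yaml earlier):
	    # Ask CRI-O for its version over the CRI socket
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version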
	I0729 12:30:06.997009  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.029517  257176 ssh_runner.go:195] Run: crio --version
	I0729 12:30:07.067863  257176 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:30:07.069386  257176 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:30:07.072261  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072653  257176 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:30:07.072677  257176 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:30:07.072963  257176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:30:07.077985  257176 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:30:07.078159  257176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:30:07.078210  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.131360  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.131380  257176 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:30:07.131434  257176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:30:07.166976  257176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:30:07.167006  257176 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:30:07.167019  257176 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:30:07.167163  257176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
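	The kubelet drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down; to see what systemd actually loads you could run (illustrative):
	    # Show the kubelet unit plus every drop-in systemd has merged for it
	    sudo systemctl cat kubelet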
	I0729 12:30:07.167263  257176 ssh_runner.go:195] Run: crio config
	I0729 12:30:07.218394  257176 cni.go:84] Creating CNI manager for ""
	I0729 12:30:07.218416  257176 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:30:07.218425  257176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:30:07.218446  257176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:30:07.218636  257176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:30:07.218660  257176 kube-vip.go:115] generating kube-vip config ...
	I0729 12:30:07.218715  257176 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:30:07.231281  257176 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:30:07.231382  257176 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
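	That manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few steps below, where the kubelet picks it up as a static pod. A quick check that the file landed and which VIP it advertises might look like this (illustrative commands; the address 192.168.39.254 comes from the config above):
	    # Static pod manifest written by minikube
	    sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml
	    # The HA virtual IP kube-vip should announce for the API server
	    sudo grep -A1 'name: address' /etc/kubernetes/manifests/kube-vip.yaml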
	I0729 12:30:07.231469  257176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:30:07.241143  257176 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:30:07.241203  257176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:30:07.251296  257176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:30:07.268752  257176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:30:07.286269  257176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
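	The kubeadm config rendered earlier is what lands in /var/tmp/minikube/kubeadm.yaml.new here. If you ever need to sanity-check such a file by hand, something along these lines should work (a hypothetical check, using the kubeadm binary path seen elsewhere in this log):
	    # Inspect and, with a recent kubeadm, validate the generated config
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new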
	I0729 12:30:07.306290  257176 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:30:07.325270  257176 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:30:07.330227  257176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:30:07.480445  257176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:30:07.495284  257176 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:30:07.495312  257176 certs.go:194] generating shared ca certs ...
	I0729 12:30:07.495334  257176 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.495514  257176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:30:07.495585  257176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:30:07.495600  257176 certs.go:256] generating profile certs ...
	I0729 12:30:07.495692  257176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:30:07.495719  257176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:30:07.495734  257176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.45 192.168.39.210 192.168.39.254]
	I0729 12:30:07.554302  257176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 ...
	I0729 12:30:07.554335  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293: {Name:mkc55706e98723442a7209c78a851c6aeec63640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554502  257176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 ...
	I0729 12:30:07.554512  257176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293: {Name:mkd6b648aa8c639f0f8174c6258aa3c28a419e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:30:07.554579  257176 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt
	I0729 12:30:07.554733  257176 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key
	I0729 12:30:07.554863  257176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:30:07.554878  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:30:07.554890  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:30:07.554905  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:30:07.554917  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:30:07.554930  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:30:07.554942  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:30:07.554954  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:30:07.554966  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:30:07.555012  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:30:07.555038  257176 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:30:07.555053  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:30:07.555074  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:30:07.555094  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:30:07.555113  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:30:07.555149  257176 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:30:07.555175  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:30:07.555188  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:07.555200  257176 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:30:07.555742  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:30:07.581960  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:30:07.606534  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:30:07.651322  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:30:07.734079  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:30:07.843422  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:30:07.919383  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:30:08.009302  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:30:08.114819  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:30:08.177084  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:30:08.323565  257176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:30:08.418339  257176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:30:08.452890  257176 ssh_runner.go:195] Run: openssl version
	I0729 12:30:08.463083  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:30:08.481125  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488340  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.488407  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:30:08.496532  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:30:08.512456  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:30:08.528227  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.535939  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.536020  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:30:08.542124  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:30:08.556827  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:30:08.570963  257176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578024  257176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.578072  257176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:30:08.583957  257176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
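	Each test -L / ln -fs pair above installs a CA under its OpenSSL subject-hash name so that tools walking /etc/ssl/certs can find it; the hash half of the link name comes straight from openssl, e.g. (illustrative, mirroring the commands already run):
	    # The link name is the certificate's subject hash plus a ".0" suffix
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # should point at minikubeCA.pem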
	I0729 12:30:08.599010  257176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:30:08.609458  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:30:08.622965  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:30:08.645142  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:30:08.661889  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:30:08.733013  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:30:08.752828  257176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
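	Each of the -checkend 86400 runs above asks openssl whether the certificate remains valid for at least another 24 hours; the command exits 0 if so. To see the actual expiry of one of these certs you could run (illustrative):
	    # Exit status 0 means the cert does not expire within the next 86400 s (24 h)
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; echo $?
	    # Human-readable expiry date
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt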
	I0729 12:30:08.763265  257176 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:30:08.763447  257176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:30:08.763516  257176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:30:08.826291  257176 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:30:08.826316  257176 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:30:08.826319  257176 cri.go:89] found id: "f39e050cd5cc4b05a81e93b2261e728d2c07bc7c1daa3162edfde11e82a4620c"
	I0729 12:30:08.826323  257176 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:30:08.826325  257176 cri.go:89] found id: "a26ce9fba519a429ebcf1de5c892752783519e02720eee753c8f8e32ce942c27"
	I0729 12:30:08.826329  257176 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:30:08.826331  257176 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:30:08.826334  257176 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:30:08.826336  257176 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:30:08.826341  257176 cri.go:89] found id: "14bf682e420cb00f83e39a018ac3723f16ed71fccee45180d30073e87b224475"
	I0729 12:30:08.826343  257176 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:30:08.826345  257176 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:30:08.826348  257176 cri.go:89] found id: "d427719357ecf4267be49591ec19dedc4068db6bce8a0c9d8b8523551afbbe91"
	I0729 12:30:08.826351  257176 cri.go:89] found id: "70136b17c65dd39a4d8ff8ecf6e4c4229432e46ce9fcbae7271cb05229ee641d"
	I0729 12:30:08.826356  257176 cri.go:89] found id: ""
	I0729 12:30:08.826397  257176 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 12:40:49 ha-767488 crio[3370]: time="2024-07-29 12:40:49.987929522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256849987903340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2c2b384-f93b-4295-8687-9216888af7e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:40:49 ha-767488 crio[3370]: time="2024-07-29 12:40:49.988456679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57f16bf8-6d9a-407c-a615-3423f54f53e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:49 ha-767488 crio[3370]: time="2024-07-29 12:40:49.988539505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57f16bf8-6d9a-407c-a615-3423f54f53e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:49 ha-767488 crio[3370]: time="2024-07-29 12:40:49.989091353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256788402908315,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256703681869380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256691386321254,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-5
4d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash
: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash:
e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff
023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b2
9ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,St
ate:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1
722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57f16bf8-6d9a-407c-a615-3423f54f53e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.030111008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb1e868a-53e1-4271-a9bb-2cf749516879 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.030198376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb1e868a-53e1-4271-a9bb-2cf749516879 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.032016565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6772d8d8-1f47-437a-b3cf-9562e35c6c02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.032461949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256850032438389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6772d8d8-1f47-437a-b3cf-9562e35c6c02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.033298265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fabceeec-62ed-49cc-8788-79fb89ec846d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.033366800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fabceeec-62ed-49cc-8788-79fb89ec846d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.033882121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256788402908315,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256703681869380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256691386321254,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-5
4d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash
: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash:
e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff
023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b2
9ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,St
ate:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1
722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fabceeec-62ed-49cc-8788-79fb89ec846d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.079521779Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb018219-329e-41f0-a0cc-251436281067 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.079611680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb018219-329e-41f0-a0cc-251436281067 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.080914484Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b98262e-f4f4-439d-992a-1acad1528555 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.081360927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256850081338789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b98262e-f4f4-439d-992a-1acad1528555 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.081894670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13f64cf0-3e84-485d-a990-2cab32de8e89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.081975719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13f64cf0-3e84-485d-a990-2cab32de8e89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.082389034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256788402908315,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256703681869380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256691386321254,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-5
4d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash
: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash:
e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff
023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b2
9ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,St
ate:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1
722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13f64cf0-3e84-485d-a990-2cab32de8e89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.121330815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=469a762b-49e3-416b-ba30-b82b47d73196 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.121416775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=469a762b-49e3-416b-ba30-b82b47d73196 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.122284067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8aec7251-ed91-4d34-9f9a-a7baf77faaec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.122721524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722256850122697762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8aec7251-ed91-4d34-9f9a-a7baf77faaec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.123222574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=278c7ba7-f66f-4efc-8403-79610a2bf8a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.123298241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=278c7ba7-f66f-4efc-8403-79610a2bf8a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:40:50 ha-767488 crio[3370]: time="2024-07-29 12:40:50.123717656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000,PodSandboxId:874397bc99826914b5d4104daaa09b6568039fbe38f320074d718e9e5231724a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256788402908315,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722256703681869380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256691386321254,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722256331678302180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f,PodSandboxId:77ae4bca5cb19cbae8e243c715df237e40e03960b3c314ef076507413931e925,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722256280679108382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256251616346684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256242073096861,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256222380481967,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256219945284420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256216950254887,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256216943960404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00,PodSandboxId:3b6ba7ca06eb581a4a2a91629a46c4c77930af80bb6b919f03153b90c5444942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722256216894393005,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256208279024203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-5
4d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256208057512207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256207875551122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash
: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b136a6e0ea0ff40254de590d5eb223041dd892ac5bc205ac62a9dd35527bf4,PodSandboxId:93f5e8a8985f2bcf337dc5c606fdcbfee46078eee63d57872f53ab34af8f5407,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255857122316434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash:
e7252d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7b67549d5f7b5d4a48dcb35a226042443e40f3f27ec541099426a5169a5298,PodSandboxId:841baabfcb1b93cc7fc8c33988801c91b5d051b836170b41affddc0507b18294,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722255856270522003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff
023d68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d,PodSandboxId:c2f0a3db73b362b3266b5caa19ac3cffd664f240f61b22c3d063cb83e4cce418,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693537674611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0,PodSandboxId:721397f12db8cec57f6f86824f9ce14f3807feb4947ed24b360a0d03f8a965ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255693522237858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316,PodSandboxId:a4aeb6b1329f7526acfed679186e4bebf4bb469874f285b78cd630242dbe5f75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722255681558013368,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1,PodSandboxId:7ba65a0686e2068590195a26da6112d109bed3cc809731a48166e72571f0fe30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b2
9ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255676947759832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb,PodSandboxId:f96272e7bee5b3b13c5be3a81a75ecc0e753f7ae90d3e1b33abe3a8ffa9c806e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,St
ate:CONTAINER_EXITED,CreatedAt:1722255657468670906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a,PodSandboxId:489cc61ac2d598a9ae1520faedb3914ef9ddec66d964bd4c0f84e126df8544a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1
722255657440721400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=278c7ba7-f66f-4efc-8403-79610a2bf8a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c2ecec373e7fc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Exited              kube-apiserver            2                   874397bc99826       kube-apiserver-ha-767488
	6f541b63f34e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       2                   3b6ba7ca06eb5       storage-provisioner
	18d7603603557       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  1                   4ac1d50b066bb       kube-vip-ha-767488
	66eeaa3de5dde       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Running             kube-controller-manager   4                   77ae4bca5cb19       kube-controller-manager-ha-767488
	149dfcffe55a7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      9 minutes ago        Exited              kube-controller-manager   3                   77ae4bca5cb19       kube-controller-manager-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      9 minutes ago        Running             busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      10 minutes ago       Running             busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	7ffae0e726786       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      10 minutes ago       Exited              kube-vip                  0                   4ac1d50b066bb       kube-vip-ha-767488
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago       Running             coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago       Running             kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago       Running             coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	76b855b3ad75b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago       Exited              storage-provisioner       1                   3b6ba7ca06eb5       storage-provisioner
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      10 minutes ago       Running             kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago       Running             kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago       Running             etcd                      1                   c38a2d43be153       etcd-ha-767488
	79b136a6e0ea0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   16 minutes ago       Exited              busybox                   0                   93f5e8a8985f2       busybox-fc5497c4f-trgfp
	3f7b67549d5f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   16 minutes ago       Exited              busybox                   0                   841baabfcb1b9       busybox-fc5497c4f-4ppv4
	c263b16acab21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago       Exited              coredns                   0                   c2f0a3db73b36       coredns-7db6d8ff4d-k6r5l
	ed92faf8d1c93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago       Exited              coredns                   0                   721397f12db8c       coredns-7db6d8ff4d-qqt5t
	e2114078a73c1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    19 minutes ago       Exited              kindnet-cni               0                   a4aeb6b1329f7       kindnet-6x56p
	a99c50ffbfb28       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      19 minutes ago       Exited              kube-proxy                0                   7ba65a0686e20       kube-proxy-sqk96
	f1ea8fbc1b3ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      19 minutes ago       Exited              kube-scheduler            0                   f96272e7bee5b       kube-scheduler-ha-767488
	dab08a0e0f3c1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago       Exited              etcd                      0                   489cc61ac2d59       etcd-ha-767488
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: Trace[1839550499]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:16.213) (total time: 11495ms):
	Trace[1839550499]: ---"Objects listed" error:Unauthorized 11495ms (12:40:27.709)
	Trace[1839550499]: [11.495462776s] [11.495462776s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[841416442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.890) (total time: 11819ms):
	Trace[841416442]: ---"Objects listed" error:Unauthorized 11819ms (12:40:27.709)
	Trace[841416442]: [11.819152896s] [11.819152896s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2022085669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.047) (total time: 12661ms):
	Trace[2022085669]: ---"Objects listed" error:Unauthorized 12661ms (12:40:27.709)
	Trace[2022085669]: [12.66151731s] [12.66151731s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1130676405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:32.086) (total time: 10721ms):
	Trace[1130676405]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 10720ms (12:40:42.807)
	Trace[1130676405]: [10.721021558s] [10.721021558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d] <==
	[INFO] 10.244.0.5:56893 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.006105477s
	[INFO] 10.244.2.2:49553 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000142409s
	[INFO] 10.244.0.4:44644 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190983s
	[INFO] 10.244.0.4:50509 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000060342s
	[INFO] 10.244.0.4:50667 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000235509s
	[INFO] 10.244.0.4:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002136398s
	[INFO] 10.244.0.5:59842 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008887594s
	[INFO] 10.244.0.5:50358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114137s
	[INFO] 10.244.2.2:51452 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000291176s
	[INFO] 10.244.2.2:40431 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000264874s
	[INFO] 10.244.2.2:35432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167749s
	[INFO] 10.244.0.4:53618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068812s
	[INFO] 10.244.0.4:52172 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001711814s
	[INFO] 10.244.0.4:47059 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131595s
	[INFO] 10.244.0.4:39902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147006s
	[INFO] 10.244.0.4:37624 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173908s
	[INFO] 10.244.0.5:52999 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135418s
	[INFO] 10.244.2.2:39192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096392s
	[INFO] 10.244.2.2:47682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102837s
	[INFO] 10.244.0.4:43135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079564s
	[INFO] 10.244.0.4:54022 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000229955s
	[INFO] 10.244.0.4:49468 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000035685s
	[INFO] 10.244.0.4:56523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000031196s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	Trace[255378117]: ---"Objects listed" error:Unauthorized 11312ms (12:40:27.700)
	Trace[255378117]: [11.312525922s] [11.312525922s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1030282606]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.356) (total time: 12346ms):
	Trace[1030282606]: ---"Objects listed" error:Unauthorized 12346ms (12:40:27.702)
	Trace[1030282606]: [12.346347085s] [12.346347085s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[418228940]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.563) (total time: 12139ms):
	Trace[418228940]: ---"Objects listed" error:Unauthorized 12138ms (12:40:27.702)
	Trace[418228940]: [12.139191986s] [12.139191986s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2011977158]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.350) (total time: 11455ms):
	Trace[2011977158]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11455ms (12:40:42.805)
	Trace[2011977158]: [11.45543795s] [11.45543795s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[856661345]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.528) (total time: 11278ms):
	Trace[856661345]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11278ms (12:40:42.807)
	Trace[856661345]: [11.278535864s] [11.278535864s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	
	
	==> coredns [ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0] <==
	[INFO] 10.244.2.2:52575 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121686s
	[INFO] 10.244.2.2:60306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096516s
	[INFO] 10.244.2.2:56750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201569s
	[INFO] 10.244.0.4:53864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076226s
	[INFO] 10.244.0.4:43895 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007603s
	[INFO] 10.244.0.4:49768 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001191618s
	[INFO] 10.244.0.4:36610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091068s
	[INFO] 10.244.0.5:36533 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152157s
	[INFO] 10.244.0.5:59316 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006399s
	[INFO] 10.244.0.5:59406 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051375s
	[INFO] 10.244.0.5:56054 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054289s
	[INFO] 10.244.2.2:32902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000349565s
	[INFO] 10.244.2.2:56936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214735s
	[INFO] 10.244.2.2:38037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076517s
	[INFO] 10.244.2.2:33788 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066283s
	[INFO] 10.244.0.4:46469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080696s
	[INFO] 10.244.0.4:56376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069276s
	[INFO] 10.244.0.4:41139 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003161s
	[INFO] 10.244.0.5:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123194s
	[INFO] 10.244.0.5:43997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184384s
	[INFO] 10.244.0.5:34612 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094985s
	[INFO] 10.244.2.2:57694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131654s
	[INFO] 10.244.2.2:52834 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009944s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.059198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062192] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.161618] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.141343] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.277698] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.104100] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.675894] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060211] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 12:21] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.541880] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"warn","ts":"2024-07-29T12:40:46.189056Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17437252754861076687,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-29T12:40:46.690115Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17437252754861076687,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-29T12:40:46.916826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:46.916937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:46.916971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:46.917004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to 30f76e47e42605a5 at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:46.91703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to d9000071a51f92ea at term 7"}
	{"level":"warn","ts":"2024-07-29T12:40:47.190906Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17437252754861076687,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-29T12:40:47.69162Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17437252754861076687,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-29T12:40:48.192757Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17437252754861076687,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-29T12:40:48.317309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:48.31736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:48.317379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:48.317394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to 30f76e47e42605a5 at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:48.317401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to d9000071a51f92ea at term 7"}
	{"level":"warn","ts":"2024-07-29T12:40:48.646636Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"30f76e47e42605a5","rtt":"1.067657ms","error":"dial tcp 192.168.39.45:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:40:48.64673Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"30f76e47e42605a5","rtt":"9.457287ms","error":"dial tcp 192.168.39.45:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:40:48.681681Z","caller":"etcdserver/v3_server.go:909","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-07-29T12:40:48.77388Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-29T12:40:48.773964Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: i/o timeout"}
	{"level":"info","ts":"2024-07-29T12:40:49.717687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:49.717843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:49.717866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:49.717915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to 30f76e47e42605a5 at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:49.717952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to d9000071a51f92ea at term 7"}
	
	
	==> etcd [dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a] <==
	{"level":"info","ts":"2024-07-29T12:28:30.36201Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"a09c9983ac28f1fd","old-leader-member-id":"a09c9983ac28f1fd","new-leader-member-id":"30f76e47e42605a5","took":"101.152061ms"}
	{"level":"info","ts":"2024-07-29T12:28:30.362301Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362459Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362504Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.362589Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362622Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.362871Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363054Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363107Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"30f76e47e42605a5","error":"failed to read 30f76e47e42605a5 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T12:28:30.363176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"warn","ts":"2024-07-29T12:28:30.363422Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T12:28:30.363566Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363616Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:28:30.363657Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.3637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.363751Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364622Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364716Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.364883Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:28:30.370716Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"warn","ts":"2024-07-29T12:28:30.370988Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55490","server-name":"","error":"read tcp 192.168.39.217:2380->192.168.39.45:55490: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:28:30.371604Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.45:55480","server-name":"","error":"set tcp 192.168.39.217:2380: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:28:31.371639Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:28:31.37168Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> kernel <==
	 12:40:50 up 20 min,  0 users,  load average: 0.24, 0.51, 0.36
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:40:29.352753       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:29.352910       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:29.352935       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:40:29.353062       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:29.353084       1 main.go:299] handling current node
	I0729 12:40:39.356408       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:39.356467       1 main.go:299] handling current node
	I0729 12:40:39.356485       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:39.356493       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:39.356693       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:39.356728       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:39.356862       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:39.356895       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:42.805541       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0729 12:40:42.805601       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	I0729 12:40:49.352084       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:49.352122       1 main.go:299] handling current node
	I0729 12:40:49.352136       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:49.352140       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:49.352268       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:49.352274       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:49.352317       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:49.352321       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:50.573724       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	E0729 12:40:50.573775       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> kindnet [e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316] <==
	I0729 12:27:52.596723       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.599761       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:02.599920       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:02.600088       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:02.600113       1 main.go:299] handling current node
	I0729 12:28:02.600135       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:02.600152       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:02.600239       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:02.600258       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602416       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:12.602457       1 main.go:299] handling current node
	I0729 12:28:12.602474       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:12.602503       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:12.602642       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:12.602666       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:12.602727       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:12.602746       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:28:22.595743       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:28:22.595784       1 main.go:299] handling current node
	I0729 12:28:22.595836       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:28:22.595843       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:28:22.596051       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:28:22.596107       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:28:22.596246       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:28:22.596285       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000] <==
	Trace[236030586]: [12.99654407s] [12.99654407s] END
	E0729 12:40:41.690911       1 cacher.go:475] cacher (secrets): unexpected ListAndWatch error: failed to list *core.Secret: etcdserver: request timed out; reinitializing...
	I0729 12:40:41.690946       1 trace.go:236] Trace[1858479194]: "List(recursive=true) etcd3" audit-id:,key:/roles,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (29-Jul-2024 12:40:28.692) (total time: 12998ms):
	Trace[1858479194]: [12.998456973s] [12.998456973s] END
	W0729 12:40:41.690980       1 reflector.go:547] storage/cacher.go:/roles: failed to list *rbac.Role: etcdserver: request timed out
	I0729 12:40:41.691012       1 trace.go:236] Trace[2143803604]: "Reflector ListAndWatch" name:storage/cacher.go:/roles (29-Jul-2024 12:40:28.692) (total time: 12998ms):
	Trace[2143803604]: ---"Objects listed" error:etcdserver: request timed out 12998ms (12:40:41.690)
	Trace[2143803604]: [12.998581597s] [12.998581597s] END
	E0729 12:40:41.691036       1 storage_rbac.go:187] unable to initialize clusterroles: etcdserver: request timed out
	F0729 12:40:41.691087       1 hooks.go:203] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
	E0729 12:40:41.691041       1 cacher.go:475] cacher (roles.rbac.authorization.k8s.io): unexpected ListAndWatch error: failed to list *rbac.Role: etcdserver: request timed out; reinitializing...
	I0729 12:40:41.690385       1 trace.go:236] Trace[1496530107]: "List(recursive=true) etcd3" audit-id:,key:/mutatingwebhookconfigurations,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (29-Jul-2024 12:40:28.690) (total time: 12999ms):
	Trace[1496530107]: [12.999525569s] [12.999525569s] END
	W0729 12:40:41.785516       1 reflector.go:547] storage/cacher.go:/mutatingwebhookconfigurations: failed to list *admissionregistration.MutatingWebhookConfiguration: etcdserver: request timed out
	I0729 12:40:41.785552       1 trace.go:236] Trace[1150970231]: "Reflector ListAndWatch" name:storage/cacher.go:/mutatingwebhookconfigurations (29-Jul-2024 12:40:28.690) (total time: 13094ms):
	Trace[1150970231]: ---"Objects listed" error:etcdserver: request timed out 13094ms (12:40:41.785)
	Trace[1150970231]: [13.094772891s] [13.094772891s] END
	E0729 12:40:41.785559       1 cacher.go:475] cacher (mutatingwebhookconfigurations.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.MutatingWebhookConfiguration: etcdserver: request timed out; reinitializing...
	I0729 12:40:41.690472       1 trace.go:236] Trace[1892242206]: "List(recursive=true) etcd3" audit-id:,key:/validatingadmissionpolicybindings,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (29-Jul-2024 12:40:28.690) (total time: 12999ms):
	Trace[1892242206]: [12.999569291s] [12.999569291s] END
	W0729 12:40:41.785582       1 reflector.go:547] storage/cacher.go:/validatingadmissionpolicybindings: failed to list *admissionregistration.ValidatingAdmissionPolicyBinding: etcdserver: request timed out
	I0729 12:40:41.785618       1 trace.go:236] Trace[1570995448]: "Reflector ListAndWatch" name:storage/cacher.go:/validatingadmissionpolicybindings (29-Jul-2024 12:40:28.690) (total time: 13094ms):
	Trace[1570995448]: ---"Objects listed" error:etcdserver: request timed out 13094ms (12:40:41.785)
	Trace[1570995448]: [13.094728646s] [13.094728646s] END
	E0729 12:40:41.785624       1 cacher.go:475] cacher (validatingadmissionpolicybindings.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.ValidatingAdmissionPolicyBinding: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f] <==
	I0729 12:31:21.148099       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:31:21.413326       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:31:21.413367       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:31:21.414936       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:31:21.415020       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:31:21.415139       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:31:21.415338       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 12:31:31.426553       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b] <==
	W0729 12:40:38.814081       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0729 12:40:39.773067       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "generic-garbage-collector" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	I0729 12:40:39.773166       1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.217:8443/api\": failed to get token for kube-system/generic-garbage-collector: timed out waiting for the condition"
	W0729 12:40:39.773712       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "resourcequota-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0729 12:40:39.773853       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.217:8443/api": failed to get token for kube-system/resourcequota-controller: timed out waiting for the condition
	W0729 12:40:40.816377       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0729 12:40:40.816469       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.217:8443/api/v1/nodes/ha-767488/status\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node="ha-767488"
	W0729 12:40:40.817506       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0729 12:40:41.319660       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0729 12:40:42.320343       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:44.321104       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:40:44.321177       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-767488"
	E0729 12:40:44.321191       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.217:8443/api/v1/nodes/ha-767488\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0729 12:40:44.321535       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:44.822602       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:45.823777       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:46.224322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=3049": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:40:46.224374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=3049": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:47.825202       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:40:47.825284       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.217:8443/api/v1/nodes/ha-767488-m02/status\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node="ha-767488-m02"
	W0729 12:40:47.825531       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:48.326264       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:49.108613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ConfigMap: Get "https://192.168.39.217:8443/api/v1/configmaps?resourceVersion=3049": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:40:49.108763       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.217:8443/api/v1/configmaps?resourceVersion=3049": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:40:49.327716       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.217:8443: connect: connection refused
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	E0729 12:38:58.686582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.830714       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.830972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:17.119592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:17.119666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.551700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.552162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:41.695764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:41.696014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:06.272423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:06.272868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:21.631308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:21.631558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:24.703209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:24.703408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1] <==
	I0729 12:21:17.255773       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:21:17.286934       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0729 12:21:17.337677       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:21:17.337727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:21:17.337746       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:21:17.340517       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:21:17.340710       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:21:17.340741       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:21:17.342294       1 config.go:192] "Starting service config controller"
	I0729 12:21:17.342534       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:21:17.342581       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:21:17.342586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:21:17.343590       1 config.go:319] "Starting node config controller"
	I0729 12:21:17.343624       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:21:17.443485       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:21:17.443697       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:21:17.443586       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	E0729 12:40:21.230397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 12:40:22.243325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:40:22.243419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:40:23.541590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:40:23.541652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:40:24.030114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.030218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:24.827144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.827194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:25.963020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:40:25.963127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:40:27.525553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:40:27.525717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:40:31.457216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:40:31.457249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:40:31.946204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:31.946255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:31.987696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:40:31.987742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:40:32.539286       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:40:32.539318       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:40:33.993576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:40:33.993629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:40:34.509160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:34.509295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	
	==> kube-scheduler [f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb] <==
	E0729 12:21:00.957258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:00.966559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:21:00.966602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:21:00.969971       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:21:00.970006       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:21:00.975481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:21:00.975514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:21:00.991207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:21:00.991302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:21:01.043730       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:21:01.043771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:21:01.201334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.201433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.269111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:21:01.269202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:21:01.308519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:21:01.308567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:21:01.484192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:21:01.484242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:21:01.488207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:21:01.488410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 12:21:03.597444       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:24:50.794520       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bgb2n" node="ha-767488-m04"
	E0729 12:24:50.794710       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bgb2n\": pod kindnet-bgb2n is already assigned to node \"ha-767488-m04\"" pod="kube-system/kindnet-bgb2n"
	E0729 12:28:30.163371       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 12:40:30 ha-767488 kubelet[1381]: E0729 12:40:30.846211    1381 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767488\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767488?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:30 ha-767488 kubelet[1381]: I0729 12:40:30.846215    1381 status_manager.go:853] "Failed to get status for pod" podUID="b1d029e38f53e06a3c7b5c185fd64a06" pod="kube-system/kube-apiserver-ha-767488" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767488\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:33 ha-767488 kubelet[1381]: E0729 12:40:33.918165    1381 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-767488.17e6af54985d8c26  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-767488,UID:b1d029e38f53e06a3c7b5c185fd64a06,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-767488,},FirstTimestamp:2024-07-29 12:38:05.38417463 +0000 UTC m=+1018.866520544,LastTimestamp:2024-07-29 12:38:05.38417463 +0000 UTC m=+1018.866520544,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-767488,}"
	Jul 29 12:40:33 ha-767488 kubelet[1381]: E0729 12:40:33.918330    1381 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767488\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767488?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:33 ha-767488 kubelet[1381]: E0729 12:40:33.918763    1381 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 12:40:33 ha-767488 kubelet[1381]: I0729 12:40:33.918534    1381 status_manager.go:853] "Failed to get status for pod" podUID="db6837dfa9a0fa8b28ce8897488c95e3" pod="kube-system/kube-vip-ha-767488" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-767488\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:36 ha-767488 kubelet[1381]: E0729 12:40:36.990291    1381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-767488?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 29 12:40:36 ha-767488 kubelet[1381]: W0729 12:40:36.990308    1381 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=3049": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 12:40:36 ha-767488 kubelet[1381]: E0729 12:40:36.990787    1381 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=3049": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 12:40:36 ha-767488 kubelet[1381]: I0729 12:40:36.990409    1381 status_manager.go:853] "Failed to get status for pod" podUID="db6837dfa9a0fa8b28ce8897488c95e3" pod="kube-system/kube-vip-ha-767488" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-767488\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:40 ha-767488 kubelet[1381]: I0729 12:40:40.062329    1381 status_manager.go:853] "Failed to get status for pod" podUID="b1d029e38f53e06a3c7b5c185fd64a06" pod="kube-system/kube-apiserver-ha-767488" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767488\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:42 ha-767488 kubelet[1381]: I0729 12:40:42.254986    1381 scope.go:117] "RemoveContainer" containerID="547d6699a30a2745531298a1b9ed1046a30f29e6cccb1ed5617c52f9f70078b3"
	Jul 29 12:40:42 ha-767488 kubelet[1381]: I0729 12:40:42.255342    1381 scope.go:117] "RemoveContainer" containerID="c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000"
	Jul 29 12:40:42 ha-767488 kubelet[1381]: E0729 12:40:42.255720    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-767488_kube-system(b1d029e38f53e06a3c7b5c185fd64a06)\"" pod="kube-system/kube-apiserver-ha-767488" podUID="b1d029e38f53e06a3c7b5c185fd64a06"
	Jul 29 12:40:43 ha-767488 kubelet[1381]: I0729 12:40:43.134242    1381 status_manager.go:853] "Failed to get status for pod" podUID="baafb1c5-8785-44de-ba07-d858ba337fce" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:46 ha-767488 kubelet[1381]: E0729 12:40:46.206287    1381 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767488\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767488?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:46 ha-767488 kubelet[1381]: E0729 12:40:46.206870    1381 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-767488.17e6af54985d8c26  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-767488,UID:b1d029e38f53e06a3c7b5c185fd64a06,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-767488,},FirstTimestamp:2024-07-29 12:38:05.38417463 +0000 UTC m=+1018.866520544,LastTimestamp:2024-07-29 12:38:05.38417463 +0000 UTC m=+1018.866520544,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-767488,}"
	Jul 29 12:40:46 ha-767488 kubelet[1381]: I0729 12:40:46.206743    1381 status_manager.go:853] "Failed to get status for pod" podUID="baafb1c5-8785-44de-ba07-d858ba337fce" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:46 ha-767488 kubelet[1381]: E0729 12:40:46.206589    1381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-767488?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 29 12:40:46 ha-767488 kubelet[1381]: I0729 12:40:46.299178    1381 scope.go:117] "RemoveContainer" containerID="c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000"
	Jul 29 12:40:46 ha-767488 kubelet[1381]: E0729 12:40:46.299644    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-767488_kube-system(b1d029e38f53e06a3c7b5c185fd64a06)\"" pod="kube-system/kube-apiserver-ha-767488" podUID="b1d029e38f53e06a3c7b5c185fd64a06"
	Jul 29 12:40:49 ha-767488 kubelet[1381]: E0729 12:40:49.278199    1381 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767488\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767488?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:49 ha-767488 kubelet[1381]: I0729 12:40:49.278202    1381 status_manager.go:853] "Failed to get status for pod" podUID="db6837dfa9a0fa8b28ce8897488c95e3" pod="kube-system/kube-vip-ha-767488" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-767488\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 12:40:50 ha-767488 kubelet[1381]: I0729 12:40:50.598745    1381 scope.go:117] "RemoveContainer" containerID="c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000"
	Jul 29 12:40:50 ha-767488 kubelet[1381]: E0729 12:40:50.599221    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-767488_kube-system(b1d029e38f53e06a3c7b5c185fd64a06)\"" pod="kube-system/kube-apiserver-ha-767488" podUID="b1d029e38f53e06a3c7b5c185fd64a06"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:40:49.729062  260402 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
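The "token too long" error in the stderr block above comes from Go's bufio.Scanner, which by default refuses to return a single line longer than 64 KiB (bufio.MaxScanTokenSize); lastStart.txt evidently contains such a line, so the helper gives up before it can echo the last start logs. A minimal sketch of that behaviour, not minikube's actual code, follows; the file name is a hypothetical stand-in for the real lastStart.txt path, and Scanner.Buffer is what raises the limit.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical stand-in for .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, any line longer than the 64 KiB default makes
		// sc.Err() return "bufio.Scanner: token too long". Allow 1 MiB lines.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			_ = sc.Text() // each log line would be handled here
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "failed to read file:", err)
		}
	}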
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488: exit status 2 (221.364665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-767488" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (174.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (459.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-767488 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 12:42:18.313435  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:44:27.881152  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:47:18.313288  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-767488 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m36.190180639s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:574: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:577: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:580: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
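The four " True" lines are consistent with the status output above: ha-767488-m03 is still present (the node delete recorded later in the Audit table shows no end time), so all four nodes report Ready where the test expects three. As an illustration only, with synthetic node data rather than the real API objects, the go-template from ha_test.go:592 can be run through Go's text/template to reproduce that output:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// The template used by the failing assertion, verbatim.
		const nodeTemplate = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

		// Synthetic stand-ins for ha-767488, m02, m03 and m04, all Ready.
		readyNode := func() map[string]any {
			return map[string]any{
				"status": map[string]any{
					"conditions": []map[string]any{
						{"type": "Ready", "status": "True"},
					},
				},
			}
		}
		nodeList := map[string]any{
			"items": []map[string]any{readyNode(), readyNode(), readyNode(), readyNode()},
		}

		t := template.Must(template.New("ready").Parse(nodeTemplate))
		if err := t.Execute(os.Stdout, nodeList); err != nil {
			panic(err)
		}
	}

Running this prints four " True" lines, matching the -- stdout -- block above.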
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.897926622s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	| node    | ha-767488 node delete m03 -v=7                                                   | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-767488 stop -v=7                                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true                                                         | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:40 UTC | 29 Jul 24 12:48 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:40:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:40:51.329866  260472 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:40:51.329974  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.329984  260472 out.go:304] Setting ErrFile to fd 2...
	I0729 12:40:51.329990  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.330183  260472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:40:51.330779  260472 out.go:298] Setting JSON to false
	I0729 12:40:51.331755  260472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8594,"bootTime":1722248257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:40:51.331823  260472 start.go:139] virtualization: kvm guest
	I0729 12:40:51.334313  260472 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:40:51.335770  260472 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:40:51.335784  260472 notify.go:220] Checking for updates...
	I0729 12:40:51.338199  260472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:40:51.339561  260472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:40:51.340932  260472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:40:51.342165  260472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:40:51.343840  260472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:40:51.345700  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:51.346109  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.346170  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.362742  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0729 12:40:51.363165  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.363711  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.363735  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.364108  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.364327  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.364586  260472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:40:51.365000  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.365043  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.379978  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0729 12:40:51.380389  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.380778  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.380814  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.381158  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.381323  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.415931  260472 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:40:51.417174  260472 start.go:297] selected driver: kvm2
	I0729 12:40:51.417189  260472 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.417335  260472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:40:51.417664  260472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.417770  260472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:40:51.432545  260472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:40:51.433500  260472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:40:51.433539  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:40:51.433548  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:40:51.433631  260472 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.433831  260472 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.435545  260472 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:40:51.436699  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:40:51.436735  260472 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:40:51.436747  260472 cache.go:56] Caching tarball of preloaded images
	I0729 12:40:51.436866  260472 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:40:51.436877  260472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:40:51.437012  260472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:40:51.437194  260472 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:40:51.437233  260472 start.go:364] duration metric: took 21.45µs to acquireMachinesLock for "ha-767488"
	I0729 12:40:51.437247  260472 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:40:51.437253  260472 fix.go:54] fixHost starting: 
	I0729 12:40:51.437521  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.437552  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.451341  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0729 12:40:51.451741  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.452191  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.452220  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.452535  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.452723  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.452885  260472 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:40:51.454319  260472 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:40:51.454350  260472 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:40:51.456154  260472 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:40:51.457351  260472 machine.go:94] provisionDockerMachine start ...
	I0729 12:40:51.457369  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.457584  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.459878  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460266  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.460296  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.460553  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460782  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.460935  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.461114  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.461124  260472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:40:51.569205  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.569235  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569499  260472 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:40:51.569524  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569693  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.572499  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.572988  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.573033  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.573160  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.573358  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573548  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573648  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.573898  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.574069  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.574089  260472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:40:51.701204  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.701229  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.703986  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704423  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.704461  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704639  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.704824  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.704975  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.705089  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.705288  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.705507  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.705531  260472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:40:51.817644  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:40:51.817684  260472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:40:51.817700  260472 buildroot.go:174] setting up certificates
	I0729 12:40:51.817709  260472 provision.go:84] configureAuth start
	I0729 12:40:51.817719  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.818054  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:40:51.820835  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821225  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.821246  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821413  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.823391  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823759  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.823788  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823928  260472 provision.go:143] copyHostCerts
	I0729 12:40:51.823969  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824015  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:40:51.824028  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824106  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:40:51.824213  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824238  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:40:51.824248  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824287  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:40:51.824345  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824376  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:40:51.824384  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824417  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:40:51.824477  260472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
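The server certificate above is regenerated with the SAN list shown in the same log line (127.0.0.1, 192.168.39.217, ha-767488, localhost, minikube). Assuming openssl is available on the build host, the SANs that ended up in the generated server.pem can be confirmed with:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'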
	I0729 12:40:52.006332  260472 provision.go:177] copyRemoteCerts
	I0729 12:40:52.006418  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:40:52.006452  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.009130  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009520  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.009546  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.009964  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.010156  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.010326  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:40:52.094644  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:40:52.094738  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:40:52.119444  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:40:52.119509  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 12:40:52.143660  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:40:52.143716  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:40:52.167324  260472 provision.go:87] duration metric: took 349.60091ms to configureAuth
	I0729 12:40:52.167355  260472 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:40:52.167557  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:52.167627  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.170399  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170750  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.170769  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170976  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.171205  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171383  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171515  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.171707  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:52.171890  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:52.171904  260472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:42:30.662176  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:42:30.662209  260472 machine.go:97] duration metric: took 1m39.204842674s to provisionDockerMachine
	I0729 12:42:30.662225  260472 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:42:30.662240  260472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:42:30.662263  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.662582  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:42:30.662612  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.665494  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666063  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.666088  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666235  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.666474  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.666633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.666847  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:30.752735  260472 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:42:30.757792  260472 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:42:30.757820  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:42:30.757900  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:42:30.757994  260472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:42:30.758009  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:42:30.758096  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:42:30.768113  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:30.793284  260472 start.go:296] duration metric: took 131.040886ms for postStartSetup
	I0729 12:42:30.793328  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.793694  260472 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:42:30.793729  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.796515  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.796959  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.796985  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.797155  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.797360  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.797508  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.797632  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:42:30.883560  260472 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:42:30.883593  260472 fix.go:56] duration metric: took 1m39.446338951s for fixHost
	I0729 12:42:30.883619  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.886076  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886458  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.886483  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.886829  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.886996  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.887140  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.887303  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:42:30.887526  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:42:30.887541  260472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 12:42:30.997876  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256950.957387407
	
	I0729 12:42:30.997906  260472 fix.go:216] guest clock: 1722256950.957387407
	I0729 12:42:30.997917  260472 fix.go:229] Guest: 2024-07-29 12:42:30.957387407 +0000 UTC Remote: 2024-07-29 12:42:30.883602483 +0000 UTC m=+99.589379345 (delta=73.784924ms)
	I0729 12:42:30.997948  260472 fix.go:200] guest clock delta is within tolerance: 73.784924ms
	I0729 12:42:30.997986  260472 start.go:83] releasing machines lock for "ha-767488", held for 1m39.560717836s
	I0729 12:42:30.998041  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.998327  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:31.000905  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001304  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.001335  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001531  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002184  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002392  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002499  260472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:42:31.002576  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.002622  260472 ssh_runner.go:195] Run: cat /version.json
	I0729 12:42:31.002652  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.005308  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005500  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005704  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.005737  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005887  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006092  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006208  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.006233  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.006272  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006459  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.006551  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006697  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006864  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.115893  260472 ssh_runner.go:195] Run: systemctl --version
	I0729 12:42:31.122469  260472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:42:31.297345  260472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:42:31.304517  260472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:42:31.304592  260472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:42:31.316445  260472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:42:31.316475  260472 start.go:495] detecting cgroup driver to use...
	I0729 12:42:31.316547  260472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:42:31.333639  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:42:31.349241  260472 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:42:31.349303  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:42:31.364204  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:42:31.378300  260472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:42:31.534355  260472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:42:31.684660  260472 docker.go:233] disabling docker service ...
	I0729 12:42:31.684748  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:42:31.700676  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:42:31.715730  260472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:42:31.862044  260472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:42:32.012656  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:42:32.026627  260472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:42:32.048998  260472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:42:32.049086  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.060466  260472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:42:32.060565  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.071761  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.082721  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.094732  260472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:42:32.106637  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.117985  260472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.131937  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.142195  260472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:42:32.151406  260472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:42:32.160525  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:32.305601  260472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:42:40.307724  260472 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.002069181s)
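The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, run conmon in the pod cgroup, and allow unprivileged low ports, after which CRI-O is restarted. A quick spot-check of the resulting drop-in over the same SSH session (paths taken from the commands above) would be:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio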
	I0729 12:42:40.307768  260472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:42:40.307825  260472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:42:40.312866  260472 start.go:563] Will wait 60s for crictl version
	I0729 12:42:40.312915  260472 ssh_runner.go:195] Run: which crictl
	I0729 12:42:40.316658  260472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:42:40.356691  260472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:42:40.356775  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.385190  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.417948  260472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:42:40.419401  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:40.422540  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.422892  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:40.422937  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.423110  260472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:42:40.427910  260472 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:42:40.428052  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:42:40.428107  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.473605  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.473627  260472 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:42:40.473677  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.600040  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.600073  260472 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:42:40.600100  260472 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:42:40.600218  260472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:42:40.600301  260472 ssh_runner.go:195] Run: crio config
	I0729 12:42:40.713091  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:42:40.713114  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:42:40.713124  260472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:42:40.713150  260472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:42:40.713297  260472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
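The generated kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one file; further down in this log it is copied to the node as /var/tmp/minikube/kubeadm.yaml.new. The exact kubeadm invocation is not shown in this excerpt, but a file like this is consumed along the lines of the following sketch (the binary path matches the one used elsewhere in the log; the flags are an assumption):

	# sketch only, not the command minikube ran here
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml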
	I0729 12:42:40.713315  260472 kube-vip.go:115] generating kube-vip config ...
	I0729 12:42:40.713354  260472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:42:40.731149  260472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:42:40.731283  260472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
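The kube-vip config above is a static Pod manifest: the log below shows it being written to /etc/kubernetes/manifests/kube-vip.yaml, the staticPodPath set in the KubeletConfiguration earlier, so the kubelet runs it directly and it advertises the HA VIP 192.168.39.254. Once the node is back up, one way to check it (assuming the usual static-pod naming of manifest name plus node name) is:

	kubectl --context ha-767488 -n kube-system get pod kube-vip-ha-767488 -o wide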
	I0729 12:42:40.731354  260472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:42:40.745678  260472 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:42:40.745771  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:42:40.756067  260472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:42:40.779511  260472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:42:40.802104  260472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:42:40.819400  260472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:42:40.835924  260472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:42:40.840719  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:40.986870  260472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:42:41.001565  260472 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:42:41.001593  260472 certs.go:194] generating shared ca certs ...
	I0729 12:42:41.001614  260472 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:42:41.001819  260472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:42:41.001875  260472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:42:41.001890  260472 certs.go:256] generating profile certs ...
	I0729 12:42:41.001972  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:42:41.002032  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:42:41.002065  260472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:42:41.002076  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:42:41.002091  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:42:41.002113  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:42:41.002131  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:42:41.002148  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:42:41.002165  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:42:41.002182  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:42:41.002198  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:42:41.002263  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:42:41.002296  260472 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:42:41.002305  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:42:41.002328  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:42:41.002348  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:42:41.002370  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:42:41.002406  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:41.002434  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.002446  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.002458  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.003070  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:42:41.027259  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:42:41.050547  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:42:41.074374  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:42:41.097416  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:42:41.120537  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:42:41.143944  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:42:41.166548  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:42:41.189375  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:42:41.212392  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:42:41.235698  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:42:41.258918  260472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:42:41.275147  260472 ssh_runner.go:195] Run: openssl version
	I0729 12:42:41.281163  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:42:41.291624  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296196  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296247  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.301759  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:42:41.310741  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:42:41.320986  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325289  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325343  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.331301  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:42:41.341279  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:42:41.351883  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.355957  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.356029  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.361571  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:42:41.370434  260472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:42:41.374797  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:42:41.380122  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:42:41.385653  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:42:41.391013  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:42:41.396652  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:42:41.402042  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
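Each of the openssl checks above uses -checkend 86400, which exits non-zero if the certificate will expire within the next 86400 seconds (24 hours); the exit status presumably lets the restart path decide whether any control-plane certificate needs to be regenerated before reuse. For example, against the etcd server certificate checked above:

	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"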
	I0729 12:42:41.407437  260472 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:42:41.407562  260472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:42:41.407600  260472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:42:41.448602  260472 cri.go:89] found id: "c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000"
	I0729 12:42:41.448629  260472 cri.go:89] found id: "6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8"
	I0729 12:42:41.448633  260472 cri.go:89] found id: "18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a"
	I0729 12:42:41.448637  260472 cri.go:89] found id: "66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b"
	I0729 12:42:41.448639  260472 cri.go:89] found id: "149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f"
	I0729 12:42:41.448643  260472 cri.go:89] found id: "7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85"
	I0729 12:42:41.448645  260472 cri.go:89] found id: "d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b"
	I0729 12:42:41.448647  260472 cri.go:89] found id: "88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770"
	I0729 12:42:41.448650  260472 cri.go:89] found id: "45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722"
	I0729 12:42:41.448655  260472 cri.go:89] found id: "76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00"
	I0729 12:42:41.448657  260472 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:42:41.448660  260472 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:42:41.448662  260472 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:42:41.448665  260472 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:42:41.448671  260472 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:42:41.448673  260472 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:42:41.448676  260472 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:42:41.448680  260472 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:42:41.448682  260472 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:42:41.448685  260472 cri.go:89] found id: ""
	I0729 12:42:41.448727  260472 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.170635976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4abf885-fa32-4ef4-8b7a-e9f5d42856f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.171787703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4abf885-fa32-4ef4-8b7a-e9f5d42856f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.176172311Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2aa65501-9dc2-45dc-baa5-bcc201eebd28 name=/runtime.v1.ImageService/ListImages
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.177272766Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e],Size_:112198984,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{
Id:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266 registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,RepoTags:[registry.k8s.io/kube-proxy:v1.30.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80 registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65],Size_:85953945,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 re
gistry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d8
67d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,RepoTags:[docker.io/kindest/kindnetd:v20240715-585640e9],RepoDigests:[docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115 docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kub
e-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,RepoTags:[docker.io/kindest/kindnetd:v20240719-e7903573],RepoDigests:[docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9 docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a],Size_:87174707,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=2aa65501-9dc2-45dc-baa5-bcc201eebd28 name=/runtim
e.v1.ImageService/ListImages
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.248356897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f3afa1f-f087-4bbd-b052-9d0773dfc117 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.248450156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f3afa1f-f087-4bbd-b052-9d0773dfc117 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.249490651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f6cbb6e-51a6-4b7e-a76d-8745ae879076 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.250030594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257309250004122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f6cbb6e-51a6-4b7e-a76d-8745ae879076 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.250583990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d2c618a-275b-4bfd-9e48-0165754109b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.250641115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d2c618a-275b-4bfd-9e48-0165754109b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.251200261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d2c618a-275b-4bfd-9e48-0165754109b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.294348282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05eb74ff-0265-4300-8fe4-7a5deaf7c9cc name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.294436661Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05eb74ff-0265-4300-8fe4-7a5deaf7c9cc name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.295595666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d22e0a39-12c1-44b0-a5d9-e28a9d4f3e6d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.296314237Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257309296286804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d22e0a39-12c1-44b0-a5d9-e28a9d4f3e6d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.297152677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84e230c6-ca98-4abb-9e3d-6e99c70ca4d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.297230818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84e230c6-ca98-4abb-9e3d-6e99c70ca4d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.298911466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84e230c6-ca98-4abb-9e3d-6e99c70ca4d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.374095947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41d5d4ee-ed2f-4bb0-9074-33e1d4e0d06a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.374205814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41d5d4ee-ed2f-4bb0-9074-33e1d4e0d06a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.384336508Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d0e5633-012a-47ad-95de-8ead3168dd4d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.384960780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257309384781661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d0e5633-012a-47ad-95de-8ead3168dd4d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.385498022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5eed12cb-babe-48a2-8a23-720fd14d8848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.385594669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5eed12cb-babe-48a2-8a23-720fd14d8848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:29 ha-767488 crio[6770]: time="2024-07-29 12:48:29.386235437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5eed12cb-babe-48a2-8a23-720fd14d8848 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f525ef9d81722       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   About a minute ago   Running             kube-controller-manager   9                   309c197fc5d30       kube-controller-manager-ha-767488
	e93850281207e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   2 minutes ago        Running             kube-apiserver            4                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	2276a6710daab       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   3 minutes ago        Exited              kube-controller-manager   8                   309c197fc5d30       kube-controller-manager-ha-767488
	cf19aeac69879       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 minutes ago        Running             storage-provisioner       5                   69a46f7b3f55b       storage-provisioner
	29fa3f76ed7cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago        Exited              storage-provisioner       4                   69a46f7b3f55b       storage-provisioner
	25b0c852c73cd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   5 minutes ago        Running             busybox                   2                   10b4f76c89c4a       busybox-fc5497c4f-trgfp
	8e5267677fe3d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   5 minutes ago        Running             busybox                   2                   a2229e9d163bb       busybox-fc5497c4f-4ppv4
	76489ee06a477       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   5 minutes ago        Exited              kube-apiserver            3                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	9d1de005960b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 minutes ago        Running             coredns                   2                   00984bd8001fe       coredns-7db6d8ff4d-qqt5t
	b384618e5ae14       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   5 minutes ago        Running             kube-vip                  2                   2c2519cb3cb91       kube-vip-ha-767488
	cc3c94fe6246a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   5 minutes ago        Running             kube-proxy                2                   33794e3552983       kube-proxy-sqk96
	2d5168de1ca60       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 minutes ago        Running             coredns                   2                   b2a043288f89f       coredns-7db6d8ff4d-k6r5l
	aa1dfc42a005d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   5 minutes ago        Running             kube-scheduler            2                   6be19c0f23e95       kube-scheduler-ha-767488
	b50ae6e8e38f6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   5 minutes ago        Running             kindnet-cni               2                   aab24bd3a9edf       kindnet-6x56p
	a311cff0c8ecc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   5 minutes ago        Running             etcd                      2                   b2a875cf8cfc1       etcd-ha-767488
	18d7603603557       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   10 minutes ago       Exited              kube-vip                  1                   4ac1d50b066bb       kube-vip-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago       Exited              busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago       Exited              busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago       Exited              coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   18 minutes ago       Exited              kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago       Exited              coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   18 minutes ago       Exited              kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   18 minutes ago       Exited              kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago       Exited              etcd                      1                   c38a2d43be153       etcd-ha-767488
	
	
	==> coredns [2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[283503875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:42:53.754) (total time: 10001ms):
	Trace[283503875]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:43:03.755)
	Trace[283503875]: [10.001379228s] [10.001379228s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[841416442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.890) (total time: 11819ms):
	Trace[841416442]: ---"Objects listed" error:Unauthorized 11819ms (12:40:27.709)
	Trace[841416442]: [11.819152896s] [11.819152896s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2022085669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.047) (total time: 12661ms):
	Trace[2022085669]: ---"Objects listed" error:Unauthorized 12661ms (12:40:27.709)
	Trace[2022085669]: [12.66151731s] [12.66151731s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1130676405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:32.086) (total time: 10721ms):
	Trace[1130676405]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 10720ms (12:40:42.807)
	Trace[1130676405]: [10.721021558s] [10.721021558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b] <==
	Trace[394769481]: [10.001110422s] [10.001110422s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1030282606]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.356) (total time: 12346ms):
	Trace[1030282606]: ---"Objects listed" error:Unauthorized 12346ms (12:40:27.702)
	Trace[1030282606]: [12.346347085s] [12.346347085s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[418228940]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.563) (total time: 12139ms):
	Trace[418228940]: ---"Objects listed" error:Unauthorized 12138ms (12:40:27.702)
	Trace[418228940]: [12.139191986s] [12.139191986s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2011977158]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.350) (total time: 11455ms):
	Trace[2011977158]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11455ms (12:40:42.805)
	Trace[2011977158]: [11.45543795s] [11.45543795s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[856661345]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.528) (total time: 11278ms):
	Trace[856661345]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11278ms (12:40:42.807)
	Trace[856661345]: [11.278535864s] [11.278535864s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-767488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-767488
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4910accb98434efca56ff8b39068800c
	  System UUID:                4910accb-9843-4efc-a56f-f8b39068800c
	  Boot ID:                    f538ab8c-89b7-40ce-b82e-7644a867ee15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4ppv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  default                     busybox-fc5497c4f-trgfp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7db6d8ff4d-k6r5l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-qqt5t             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-767488                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-6x56p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-767488             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-767488    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-sqk96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-767488             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-767488                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 5m3s                 kube-proxy       
	  Normal   Starting                 17m                  kube-proxy       
	  Normal   Starting                 27m                  kube-proxy       
	  Normal   Starting                 27m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)    kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)    kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)    kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientMemory  27m                  kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     27m                  kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   Starting                 27m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    27m                  kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           27m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   NodeReady                26m                  kubelet          Node ha-767488 status is now: NodeReady
	  Normal   RegisteredNode           25m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           24m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           22m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Warning  ContainerGCFailed        6m23s (x4 over 19m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m52s                node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           92s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           28s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	
	
	Name:               ha-767488-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:22:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-767488-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9a3fe2d6456464f8574d4c1d95e4f21
	  System UUID:                d9a3fe2d-6456-464f-8574-d4c1d95e4f21
	  Boot ID:                    5cef9760-b094-4a5a-943c-bf1eb8a249d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jjx77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-767488-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-l7jpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-767488-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-767488-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-d9lg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-767488-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-767488-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m35s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 22m                    kube-proxy       
	  Normal   Starting                 26m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           25m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           25m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           24m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   NodeAllocatableEnforced  22m                    kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 22m                    kubelet          Node ha-767488-m02 has been rebooted, boot id: 9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Normal   Starting                 22m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m                    kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  ContainerGCFailed        17m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           16m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m52s                  node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           92s                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           28s                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	
	
	Name:               ha-767488-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-767488-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ca168b2de41451a82ff59b787c535ad
	  System UUID:                5ca168b2-de41-451a-82ff-59b787c535ad
	  Boot ID:                    f8572197-e522-4d4c-92d1-3c0e30179060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-767488-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-bz9pp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-767488-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-767488-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-tzj27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-767488-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-767488-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 43s                kube-proxy       
	  Normal   Starting                 24m                kube-proxy       
	  Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           24m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           24m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           24m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           22m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeNotReady             21m                node-controller  Node ha-767488-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           16m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           4m52s              node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s (x2 over 62s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x2 over 62s)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x2 over 62s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 61s                kubelet          Node ha-767488-m03 has been rebooted, boot id: f8572197-e522-4d4c-92d1-3c0e30179060
	  Normal   NodeReady                61s                kubelet          Node ha-767488-m03 status is now: NodeReady
	  Normal   RegisteredNode           28s                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	
	
	Name:               ha-767488-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-767488-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 326a5fb51aae42b7b8056fc3c9e53faf
	  System UUID:                326a5fb5-1aae-42b7-b805-6fc3c9e53faf
	  Boot ID:                    5525fcaf-d53c-41e4-a857-9519defa86cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bgb2n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-2m5gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   Starting                 23m                kube-proxy       
	  Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           23m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeReady                23m                kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal   RegisteredNode           22m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeNotReady             21m                node-controller  Node ha-767488-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           16m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           4m53s              node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           93s                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           29s                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10s (x2 over 10s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s (x2 over 10s)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s (x2 over 10s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 10s                kubelet          Node ha-767488-m04 has been rebooted, boot id: 5525fcaf-d53c-41e4-a857-9519defa86cc
	  Normal   NodeReady                10s                kubelet          Node ha-767488-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	[Jul29 12:42] systemd-fstab-generator[6690]: Ignoring "noauto" option for root device
	[  +0.156941] systemd-fstab-generator[6701]: Ignoring "noauto" option for root device
	[  +0.179346] systemd-fstab-generator[6715]: Ignoring "noauto" option for root device
	[  +0.150490] systemd-fstab-generator[6727]: Ignoring "noauto" option for root device
	[  +0.290091] systemd-fstab-generator[6755]: Ignoring "noauto" option for root device
	[  +8.442471] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.232725] systemd-fstab-generator[6987]: Ignoring "noauto" option for root device
	[  +4.882020] kauditd_printk_skb: 101 callbacks suppressed
	[Jul29 12:43] kauditd_printk_skb: 11 callbacks suppressed
	[ +20.903422] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"info","ts":"2024-07-29T12:40:51.117051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to d9000071a51f92ea at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:52.278962Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T12:40:52.279017Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	{"level":"warn","ts":"2024-07-29T12:40:52.279122Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.279145Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.290948Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.291007Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:40:52.291066Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T12:40:52.291289Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291329Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291362Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291459Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291525Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291589Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291602Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291608Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.29172Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291751Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291782Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291865Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.304523Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.30477Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.304858Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> etcd [a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828] <==
	{"level":"warn","ts":"2024-07-29T12:47:10.268021Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:47:15.268948Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-29T12:47:15.269045Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-29T12:47:20.26945Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:47:20.269572Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:47:25.269545Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:25.26973Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:30.269867Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:30.270071Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:35.270164Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:35.270231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:40.271157Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:40.271332Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T12:47:44.279992Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.280139Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.280224Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.287376Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"d9000071a51f92ea","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T12:47:44.287506Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.291104Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"d9000071a51f92ea","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T12:47:44.291154Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"warn","ts":"2024-07-29T12:47:44.965528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.230668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:44.966066Z","caller":"traceutil/trace.go:171","msg":"trace[453902694] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3782; }","duration":"106.799466ms","start":"2024-07-29T12:47:44.859168Z","end":"2024-07-29T12:47:44.965968Z","steps":["trace[453902694] 'range keys from in-memory index tree'  (duration: 103.837804ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:47:50.971028Z","caller":"traceutil/trace.go:171","msg":"trace[1827493490] linearizableReadLoop","detail":"{readStateIndex:4330; appliedIndex:4330; }","duration":"111.574421ms","start":"2024-07-29T12:47:50.859424Z","end":"2024-07-29T12:47:50.970999Z","steps":["trace[1827493490] 'read index received'  (duration: 111.56917ms)","trace[1827493490] 'applied index is now lower than readState.Index'  (duration: 3.955µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:47:50.971255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.808091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:50.971313Z","caller":"traceutil/trace.go:171","msg":"trace[1184293185] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3807; }","duration":"111.898697ms","start":"2024-07-29T12:47:50.8594Z","end":"2024-07-29T12:47:50.971299Z","steps":["trace[1184293185] 'agreement among raft nodes before linearized reading'  (duration: 111.702182ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:48:30 up 28 min,  0 users,  load average: 0.72, 0.36, 0.31
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:40:29.352753       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:29.352910       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:29.352935       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:40:29.353062       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:29.353084       1 main.go:299] handling current node
	I0729 12:40:39.356408       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:39.356467       1 main.go:299] handling current node
	I0729 12:40:39.356485       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:39.356493       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:39.356693       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:39.356728       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:39.356862       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:39.356895       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:42.805541       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0729 12:40:42.805601       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	I0729 12:40:49.352084       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:49.352122       1 main.go:299] handling current node
	I0729 12:40:49.352136       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:49.352140       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:49.352268       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:49.352274       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:49.352317       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:49.352321       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:50.573724       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	E0729 12:40:50.573775       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> kindnet [b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577] <==
	I0729 12:47:55.907962       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:05.899403       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:48:05.899505       1 main.go:299] handling current node
	I0729 12:48:05.899541       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:48:05.899572       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:05.899747       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:48:05.899783       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:48:05.900023       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:48:05.900055       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:48:15.907553       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:48:15.907654       1 main.go:299] handling current node
	I0729 12:48:15.907674       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:48:15.907683       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:15.907923       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:48:15.907962       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:48:15.908062       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:48:15.908093       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:48:25.900373       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:48:25.900491       1 main.go:299] handling current node
	I0729 12:48:25.900524       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:48:25.900544       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:25.900716       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:48:25.900741       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:48:25.900893       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:48:25.900921       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf] <==
	I0729 12:42:49.430000       1 options.go:221] external host was not specified, using 192.168.39.217
	I0729 12:42:49.431238       1 server.go:148] Version: v1.30.3
	I0729 12:42:49.431325       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:42:49.866306       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 12:42:49.878045       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:42:49.881570       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 12:42:49.881667       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 12:42:49.881920       1 instance.go:299] Using reconciler: lease
	W0729 12:43:09.865456       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 12:43:09.865695       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 12:43:09.882923       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15] <==
	I0729 12:45:59.628077       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 12:45:59.628961       1 aggregator.go:163] waiting for initial CRD sync...
	I0729 12:45:59.629024       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0729 12:45:59.629373       1 available_controller.go:423] Starting AvailableConditionController
	I0729 12:45:59.638565       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0729 12:45:59.676095       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:45:59.693227       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:45:59.693266       1 policy_source.go:224] refreshing policies
	I0729 12:45:59.696534       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:45:59.726284       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:45:59.731995       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:45:59.732057       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 12:45:59.732134       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:45:59.732180       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:45:59.732200       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:45:59.732874       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:45:59.733570       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:45:59.733620       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:45:59.733627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:45:59.733632       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:45:59.738603       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:46:00.638385       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 12:46:01.060195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.45]
	I0729 12:46:01.061970       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 12:46:01.070860       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b] <==
	I0729 12:45:07.191616       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:45:07.718594       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:45:07.718688       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:45:07.721330       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:45:07.723008       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:45:07.723288       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:45:07.723400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 12:45:17.724994       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.217:8443/healthz\": dial tcp 192.168.39.217:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0] <==
	I0729 12:46:57.268379       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 12:46:57.274850       1 shared_informer.go:320] Caches are synced for service account
	I0729 12:46:57.279272       1 shared_informer.go:320] Caches are synced for job
	I0729 12:46:57.281941       1 shared_informer.go:320] Caches are synced for HPA
	I0729 12:46:57.302717       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m04"
	I0729 12:46:57.302855       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488"
	I0729 12:46:57.302885       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m02"
	I0729 12:46:57.302912       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m03"
	I0729 12:46:57.303103       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:46:57.306059       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 12:46:57.316545       1 shared_informer.go:320] Caches are synced for disruption
	I0729 12:46:57.355364       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 12:46:57.366896       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 12:46:57.410323       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:46:57.414574       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.464419       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.471777       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:46:57.890551       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958671       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:47:28.903601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.335µs"
	I0729 12:47:29.046385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.149µs"
	I0729 12:47:29.064238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.904µs"
	I0729 12:47:29.065729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.519µs"
	I0729 12:48:20.069830       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	E0729 12:38:58.686582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.830714       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.830972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:17.119592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:17.119666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.551700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.552162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:41.695764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:41.696014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:06.272423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:06.272868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:21.631308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:21.631558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:24.703209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:24.703408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0] <==
	I0729 12:43:26.005579       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:43:26.005921       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:43:26.005961       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:43:26.008082       1 config.go:192] "Starting service config controller"
	I0729 12:43:26.008129       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:43:26.008154       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:43:26.008158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:43:26.009228       1 config.go:319] "Starting node config controller"
	I0729 12:43:26.009261       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0729 12:43:29.022916       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:43:29.023565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.024040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.095150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:43:33.908457       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:43:34.210378       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:43:34.908288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	W0729 12:40:22.243325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:40:22.243419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:40:23.541590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:40:23.541652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:40:24.030114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.030218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:24.827144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.827194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:25.963020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:40:25.963127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:40:27.525553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:40:27.525717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:40:31.457216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:40:31.457249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:40:31.946204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:31.946255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:31.987696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:40:31.987742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:40:32.539286       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:40:32.539318       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:40:33.993576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:40:33.993629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:40:34.509160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:34.509295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:52.283637       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba] <==
	W0729 12:45:23.326174       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:23.326305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:24.458610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.217:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:24.458749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.217:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:31.373849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:31.374004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:32.914014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.217:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:32.914137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.217:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:33.370926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:33.371053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:38.194294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:38.194364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:39.021894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:39.022019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:40.035456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:40.035546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:41.325471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:41.325533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:53.830647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:53.830888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.268363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.268503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.603189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.603346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	I0729 12:46:02.189750       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:45:51 ha-767488 kubelet[1381]: I0729 12:45:51.667355    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:45:51 ha-767488 kubelet[1381]: E0729 12:45:51.667697    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:45:57 ha-767488 kubelet[1381]: I0729 12:45:57.666883    1381 scope.go:117] "RemoveContainer" containerID="76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf"
	Jul 29 12:46:04 ha-767488 kubelet[1381]: I0729 12:46:04.667382    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:04 ha-767488 kubelet[1381]: E0729 12:46:04.667691    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:06 ha-767488 kubelet[1381]: E0729 12:46:06.693101    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: I0729 12:46:19.667641    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: E0729 12:46:19.668493    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: I0729 12:46:32.667323    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: E0729 12:46:32.669010    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:45 ha-767488 kubelet[1381]: I0729 12:46:45.667906    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:47:06 ha-767488 kubelet[1381]: E0729 12:47:06.688205    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:48:06 ha-767488 kubelet[1381]: E0729 12:48:06.688745    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:48:28.873621  262628 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
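The "bufio.Scanner: token too long" error in the stderr block above is a standard Go limitation rather than a cluster problem: bufio.Scanner refuses to return a single line longer than its buffer (64 KiB by default, bufio.MaxScanTokenSize), and the previous-start log evidently contains such a line. Below is a minimal, self-contained sketch of reading a file line by line with an enlarged scanner buffer; the file name and the 1 MiB limit are illustrative assumptions, not minikube's actual code.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path only; the report's real file is .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is bufio.MaxScanTokenSize (64 KiB); raising it
	// keeps very long lines from aborting the scan with "token too long".
	sc.Buffer(make([]byte, 64*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err) // with the default buffer this is where "token too long" surfaces
	}
}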
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (459.82s)
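The kubelet section of the log above also shows repeated "Could not set up iptables canary" errors for the ip6tables `nat` table. That message usually indicates the guest kernel cannot initialize the IPv6 NAT table (for example, ip6table_nat is not loaded), and it is separate from the kube-controller-manager CrashLoopBackOff entries in the same log. The probe below is a hypothetical sketch, not minikube code, and assumes an ip6tables binary is present.

package main

import (
	"fmt"
	"os/exec"
)

// Hypothetical probe: checks whether the ip6tables "nat" table that the
// kubelet canary needs can be initialized on this machine.
func main() {
	if err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").Run(); err != nil {
		// Matches the failure mode in the kubelet log: the table cannot be
		// initialized, typically because the ip6table_nat module is not loaded.
		fmt.Println("ip6tables nat table unavailable:", err)
		fmt.Println("loading it usually means: sudo modprobe ip6table_nat")
		return
	}
	fmt.Println("ip6tables nat table is available")
}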

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-767488" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-767488\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-767488\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"
APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-767488\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.217\",\"Port\":8443,\"Kubernet
esVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.45\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.210\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.181\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false
,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.870985664s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	| node    | ha-767488 node delete m03 -v=7                                                   | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-767488 stop -v=7                                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true                                                         | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:40 UTC | 29 Jul 24 12:48 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:40:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:40:51.329866  260472 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:40:51.329974  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.329984  260472 out.go:304] Setting ErrFile to fd 2...
	I0729 12:40:51.329990  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.330183  260472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:40:51.330779  260472 out.go:298] Setting JSON to false
	I0729 12:40:51.331755  260472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8594,"bootTime":1722248257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:40:51.331823  260472 start.go:139] virtualization: kvm guest
	I0729 12:40:51.334313  260472 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:40:51.335770  260472 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:40:51.335784  260472 notify.go:220] Checking for updates...
	I0729 12:40:51.338199  260472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:40:51.339561  260472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:40:51.340932  260472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:40:51.342165  260472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:40:51.343840  260472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:40:51.345700  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:51.346109  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.346170  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.362742  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0729 12:40:51.363165  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.363711  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.363735  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.364108  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.364327  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.364586  260472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:40:51.365000  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.365043  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.379978  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0729 12:40:51.380389  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.380778  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.380814  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.381158  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.381323  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.415931  260472 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:40:51.417174  260472 start.go:297] selected driver: kvm2
	I0729 12:40:51.417189  260472 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.417335  260472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:40:51.417664  260472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.417770  260472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:40:51.432545  260472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:40:51.433500  260472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:40:51.433539  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:40:51.433548  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:40:51.433631  260472 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.433831  260472 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.435545  260472 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:40:51.436699  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:40:51.436735  260472 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:40:51.436747  260472 cache.go:56] Caching tarball of preloaded images
	I0729 12:40:51.436866  260472 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:40:51.436877  260472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:40:51.437012  260472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:40:51.437194  260472 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:40:51.437233  260472 start.go:364] duration metric: took 21.45µs to acquireMachinesLock for "ha-767488"
	I0729 12:40:51.437247  260472 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:40:51.437253  260472 fix.go:54] fixHost starting: 
	I0729 12:40:51.437521  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.437552  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.451341  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0729 12:40:51.451741  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.452191  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.452220  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.452535  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.452723  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.452885  260472 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:40:51.454319  260472 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:40:51.454350  260472 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:40:51.456154  260472 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:40:51.457351  260472 machine.go:94] provisionDockerMachine start ...
	I0729 12:40:51.457369  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.457584  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.459878  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460266  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.460296  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.460553  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460782  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.460935  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.461114  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.461124  260472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:40:51.569205  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.569235  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569499  260472 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:40:51.569524  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569693  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.572499  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.572988  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.573033  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.573160  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.573358  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573548  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573648  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.573898  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.574069  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.574089  260472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:40:51.701204  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.701229  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.703986  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704423  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.704461  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704639  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.704824  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.704975  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.705089  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.705288  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.705507  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.705531  260472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:40:51.817644  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:40:51.817684  260472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:40:51.817700  260472 buildroot.go:174] setting up certificates
	I0729 12:40:51.817709  260472 provision.go:84] configureAuth start
	I0729 12:40:51.817719  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.818054  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:40:51.820835  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821225  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.821246  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821413  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.823391  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823759  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.823788  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823928  260472 provision.go:143] copyHostCerts
	I0729 12:40:51.823969  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824015  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:40:51.824028  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824106  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:40:51.824213  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824238  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:40:51.824248  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824287  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:40:51.824345  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824376  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:40:51.824384  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824417  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:40:51.824477  260472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
	I0729 12:40:52.006332  260472 provision.go:177] copyRemoteCerts
	I0729 12:40:52.006418  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:40:52.006452  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.009130  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009520  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.009546  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.009964  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.010156  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.010326  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:40:52.094644  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:40:52.094738  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:40:52.119444  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:40:52.119509  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 12:40:52.143660  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:40:52.143716  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:40:52.167324  260472 provision.go:87] duration metric: took 349.60091ms to configureAuth
	I0729 12:40:52.167355  260472 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:40:52.167557  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:52.167627  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.170399  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170750  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.170769  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170976  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.171205  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171383  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171515  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.171707  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:52.171890  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:52.171904  260472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:42:30.662176  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:42:30.662209  260472 machine.go:97] duration metric: took 1m39.204842674s to provisionDockerMachine
	I0729 12:42:30.662225  260472 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:42:30.662240  260472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:42:30.662263  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.662582  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:42:30.662612  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.665494  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666063  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.666088  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666235  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.666474  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.666633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.666847  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:30.752735  260472 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:42:30.757792  260472 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:42:30.757820  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:42:30.757900  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:42:30.757994  260472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:42:30.758009  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:42:30.758096  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:42:30.768113  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:30.793284  260472 start.go:296] duration metric: took 131.040886ms for postStartSetup
	I0729 12:42:30.793328  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.793694  260472 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:42:30.793729  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.796515  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.796959  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.796985  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.797155  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.797360  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.797508  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.797632  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:42:30.883560  260472 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:42:30.883593  260472 fix.go:56] duration metric: took 1m39.446338951s for fixHost
	I0729 12:42:30.883619  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.886076  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886458  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.886483  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.886829  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.886996  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.887140  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.887303  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:42:30.887526  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:42:30.887541  260472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 12:42:30.997876  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256950.957387407
	
	I0729 12:42:30.997906  260472 fix.go:216] guest clock: 1722256950.957387407
	I0729 12:42:30.997917  260472 fix.go:229] Guest: 2024-07-29 12:42:30.957387407 +0000 UTC Remote: 2024-07-29 12:42:30.883602483 +0000 UTC m=+99.589379345 (delta=73.784924ms)
	I0729 12:42:30.997948  260472 fix.go:200] guest clock delta is within tolerance: 73.784924ms
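As a quick sanity check of the logged delta: the guest clock reads 1722256950.957387407 (12:42:30.957387407 UTC) while the host-side timestamp is 12:42:30.883602483 UTC, and 0.957387407 - 0.883602483 = 0.073784924 s, i.e. the 73.784924ms delta reported above, so no clock adjustment is attempted.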
	I0729 12:42:30.997986  260472 start.go:83] releasing machines lock for "ha-767488", held for 1m39.560717836s
	I0729 12:42:30.998041  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.998327  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:31.000905  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001304  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.001335  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001531  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002184  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002392  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002499  260472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:42:31.002576  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.002622  260472 ssh_runner.go:195] Run: cat /version.json
	I0729 12:42:31.002652  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.005308  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005500  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005704  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.005737  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005887  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006092  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006208  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.006233  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.006272  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006459  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.006551  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006697  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006864  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.115893  260472 ssh_runner.go:195] Run: systemctl --version
	I0729 12:42:31.122469  260472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:42:31.297345  260472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:42:31.304517  260472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:42:31.304592  260472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:42:31.316445  260472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:42:31.316475  260472 start.go:495] detecting cgroup driver to use...
	I0729 12:42:31.316547  260472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:42:31.333639  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:42:31.349241  260472 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:42:31.349303  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:42:31.364204  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:42:31.378300  260472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:42:31.534355  260472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:42:31.684660  260472 docker.go:233] disabling docker service ...
	I0729 12:42:31.684748  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:42:31.700676  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:42:31.715730  260472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:42:31.862044  260472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:42:32.012656  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:42:32.026627  260472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:42:32.048998  260472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:42:32.049086  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.060466  260472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:42:32.060565  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.071761  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.082721  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.094732  260472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:42:32.106637  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.117985  260472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.131937  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.142195  260472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:42:32.151406  260472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:42:32.160525  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:32.305601  260472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:42:40.307724  260472 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.002069181s)
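Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following keys (a sketch assembled only from the replacement strings in the commands above; surrounding sections and any other keys are whatever the stock file already had):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]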
	I0729 12:42:40.307768  260472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:42:40.307825  260472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:42:40.312866  260472 start.go:563] Will wait 60s for crictl version
	I0729 12:42:40.312915  260472 ssh_runner.go:195] Run: which crictl
	I0729 12:42:40.316658  260472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:42:40.356691  260472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
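The probe above can be reproduced by hand against the same socket that the /etc/crictl.yaml written earlier points at:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

which should report RuntimeName cri-o and RuntimeVersion 1.29.1, matching the output captured here.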
	I0729 12:42:40.356775  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.385190  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.417948  260472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:42:40.419401  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:40.422540  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.422892  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:40.422937  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.423110  260472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:42:40.427910  260472 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:42:40.428052  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:42:40.428107  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.473605  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.473627  260472 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:42:40.473677  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.600040  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.600073  260472 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:42:40.600100  260472 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:42:40.600218  260472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:42:40.600301  260472 ssh_runner.go:195] Run: crio config
	I0729 12:42:40.713091  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:42:40.713114  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:42:40.713124  260472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:42:40.713150  260472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:42:40.713297  260472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
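The controlPlaneEndpoint above, control-plane.minikube.internal:8443, is expected to resolve to the HA virtual IP 192.168.39.254 (the APIServerHAVIP advertised by kube-vip below); the later grep against /etc/hosts checks exactly that mapping. A hypothetical manual equivalent:

    grep control-plane.minikube.internal /etc/hosts
    # expected: 192.168.39.254	control-plane.minikube.internal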
	
	I0729 12:42:40.713315  260472 kube-vip.go:115] generating kube-vip config ...
	I0729 12:42:40.713354  260472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:42:40.731149  260472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:42:40.731283  260472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
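Once kubelet is started and picks this manifest up from /etc/kubernetes/manifests, a few hedged spot-checks (the pod name follows the usual static-pod <name>-<node> convention, and the VIP and interface come straight from the env vars above):

    sudo crictl ps --name kube-vip
    kubectl -n kube-system get pod kube-vip-ha-767488 -o wide
    ip addr show dev eth0 | grep 192.168.39.254    # present only while this node holds the plndr-cp-lock lease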
	I0729 12:42:40.731354  260472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:42:40.745678  260472 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:42:40.745771  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:42:40.756067  260472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:42:40.779511  260472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:42:40.802104  260472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:42:40.819400  260472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:42:40.835924  260472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:42:40.840719  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:40.986870  260472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:42:41.001565  260472 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:42:41.001593  260472 certs.go:194] generating shared ca certs ...
	I0729 12:42:41.001614  260472 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:42:41.001819  260472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:42:41.001875  260472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:42:41.001890  260472 certs.go:256] generating profile certs ...
	I0729 12:42:41.001972  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:42:41.002032  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:42:41.002065  260472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:42:41.002076  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:42:41.002091  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:42:41.002113  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:42:41.002131  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:42:41.002148  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:42:41.002165  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:42:41.002182  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:42:41.002198  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:42:41.002263  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:42:41.002296  260472 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:42:41.002305  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:42:41.002328  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:42:41.002348  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:42:41.002370  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:42:41.002406  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:41.002434  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.002446  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.002458  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.003070  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:42:41.027259  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:42:41.050547  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:42:41.074374  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:42:41.097416  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:42:41.120537  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:42:41.143944  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:42:41.166548  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:42:41.189375  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:42:41.212392  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:42:41.235698  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:42:41.258918  260472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
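With the profile certificates in place, a quick way to confirm they chain to the CAs that were just copied (a hypothetical check, not something the test itself runs; it assumes apiserver.crt is signed by minikubeCA and proxy-client.crt by proxyClientCA, as the file names suggest):

    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
    sudo openssl verify -CAfile /var/lib/minikube/certs/proxy-client-ca.crt /var/lib/minikube/certs/proxy-client.crt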
	I0729 12:42:41.275147  260472 ssh_runner.go:195] Run: openssl version
	I0729 12:42:41.281163  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:42:41.291624  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296196  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296247  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.301759  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:42:41.310741  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:42:41.320986  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325289  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325343  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.331301  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:42:41.341279  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:42:41.351883  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.355957  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.356029  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.361571  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:42:41.370434  260472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:42:41.374797  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:42:41.380122  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:42:41.385653  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:42:41.391013  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:42:41.396652  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:42:41.402042  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
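Both idioms in the last few commands are standard openssl one-liners: the c_rehash-style symlink uses the certificate's subject hash as the link name, and -checkend exits non-zero if the certificate expires within the given number of seconds. A minimal sketch using values already seen above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941, the link name used above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for at least 24h"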
	I0729 12:42:41.407437  260472 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:42:41.407562  260472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:42:41.407600  260472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:42:41.448602  260472 cri.go:89] found id: "c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000"
	I0729 12:42:41.448629  260472 cri.go:89] found id: "6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8"
	I0729 12:42:41.448633  260472 cri.go:89] found id: "18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a"
	I0729 12:42:41.448637  260472 cri.go:89] found id: "66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b"
	I0729 12:42:41.448639  260472 cri.go:89] found id: "149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f"
	I0729 12:42:41.448643  260472 cri.go:89] found id: "7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85"
	I0729 12:42:41.448645  260472 cri.go:89] found id: "d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b"
	I0729 12:42:41.448647  260472 cri.go:89] found id: "88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770"
	I0729 12:42:41.448650  260472 cri.go:89] found id: "45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722"
	I0729 12:42:41.448655  260472 cri.go:89] found id: "76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00"
	I0729 12:42:41.448657  260472 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:42:41.448660  260472 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:42:41.448662  260472 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:42:41.448665  260472 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:42:41.448671  260472 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:42:41.448673  260472 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:42:41.448676  260472 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:42:41.448680  260472 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:42:41.448682  260472 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:42:41.448685  260472 cri.go:89] found id: ""
	I0729 12:42:41.448727  260472 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.307009799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257312306981756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36c26d81-f99c-417b-8d7e-4956ddd70a4a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.307627035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81f084b2-780b-400e-b70d-9fa387f84af4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.307712475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81f084b2-780b-400e-b70d-9fa387f84af4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.308293845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81f084b2-780b-400e-b70d-9fa387f84af4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.356327151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=734c4f67-d739-4ab6-adbd-f23cdf4a4311 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.356414996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=734c4f67-d739-4ab6-adbd-f23cdf4a4311 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.357621319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3d3d6be-513c-42f3-b8b1-644fdae591de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.358337433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257312358311598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3d3d6be-513c-42f3-b8b1-644fdae591de name=/runtime.v1.ImageService/ImageFsInfo
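[editor's note] The ImageFsInfo exchange above reports crio's overlay-images store (UsedBytes 154769, InodesUsed 72). As a point of reference only, a minimal Go sketch of issuing the same CRI call directly is shown below; the socket path /var/run/crio/crio.sock and the v1 CRI API are assumptions, not something this report verifies.

// Sketch: query crio's image filesystem usage over CRI, mirroring the
// ImageFsInfo request/response captured above. Socket path is assumed.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the crio endpoint over a unix socket (default path assumed).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, fs := range resp.ImageFilesystems {
		// The log above reports overlay-images with 154769 bytes and 72 inodes used.
		fmt.Printf("%s: used=%d bytes, inodes=%d\n",
			fs.FsId.Mountpoint, fs.UsedBytes.Value, fs.InodesUsed.Value)
	}
}

Run against the node in this log, it should print one line per image filesystem, matching the FilesystemUsage values in the response above.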
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.359073265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d4d25b9-0618-463a-82d8-5400a05eecab name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.359146056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d4d25b9-0618-463a-82d8-5400a05eecab name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.359597011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d4d25b9-0618-463a-82d8-5400a05eecab name=/runtime.v1.RuntimeService/ListContainers
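[editor's note] The request/response pair that ends here is the unfiltered ListContainers poll that crio keeps answering throughout this capture. A minimal sketch of the same query in Go, printing one summary line per container instead of the full protobuf dump, could look like the following; the socket path and module versions are assumptions.

// Sketch: the same unfiltered ListContainers call crio answers repeatedly above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty request (no ContainerFilter) asks for every container,
	// the case crio logs as "No filters were applied, returning full container list".
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.12s  %-24s attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

On the node captured here this would list the running attempt-2 control-plane containers alongside the exited attempt-1 copies shown in the dumps above.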
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.411474084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02bde2fc-f782-4a8e-8890-c1ed9a7748d7 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.411573825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02bde2fc-f782-4a8e-8890-c1ed9a7748d7 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.412644473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=592432de-5f66-418c-9ab0-f17034809ba1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.413459861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257312413434220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=592432de-5f66-418c-9ab0-f17034809ba1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.414002918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca54ea74-d3e1-44ef-a636-aa500cf80d5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.414076481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca54ea74-d3e1-44ef-a636-aa500cf80d5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.414745573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca54ea74-d3e1-44ef-a636-aa500cf80d5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.461965975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b63b5e8-9b6b-43fa-bc2b-62f0a5aaa522 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.462059587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b63b5e8-9b6b-43fa-bc2b-62f0a5aaa522 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.463135764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56fe5fa6-b857-440d-8c71-d2122df265b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.463628954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257312463600343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56fe5fa6-b857-440d-8c71-d2122df265b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.464198763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53f388d7-3fe8-4ba0-9fb5-59748d6deefd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.464251032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53f388d7-3fe8-4ba0-9fb5-59748d6deefd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:48:32 ha-767488 crio[6770]: time="2024-07-29 12:48:32.467026229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53f388d7-3fe8-4ba0-9fb5-59748d6deefd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f525ef9d81722       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   About a minute ago   Running             kube-controller-manager   9                   309c197fc5d30       kube-controller-manager-ha-767488
	e93850281207e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   2 minutes ago        Running             kube-apiserver            4                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	2276a6710daab       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   3 minutes ago        Exited              kube-controller-manager   8                   309c197fc5d30       kube-controller-manager-ha-767488
	cf19aeac69879       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 minutes ago        Running             storage-provisioner       5                   69a46f7b3f55b       storage-provisioner
	29fa3f76ed7cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago        Exited              storage-provisioner       4                   69a46f7b3f55b       storage-provisioner
	25b0c852c73cd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   5 minutes ago        Running             busybox                   2                   10b4f76c89c4a       busybox-fc5497c4f-trgfp
	8e5267677fe3d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   5 minutes ago        Running             busybox                   2                   a2229e9d163bb       busybox-fc5497c4f-4ppv4
	76489ee06a477       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   5 minutes ago        Exited              kube-apiserver            3                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	9d1de005960b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 minutes ago        Running             coredns                   2                   00984bd8001fe       coredns-7db6d8ff4d-qqt5t
	b384618e5ae14       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   5 minutes ago        Running             kube-vip                  2                   2c2519cb3cb91       kube-vip-ha-767488
	cc3c94fe6246a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   5 minutes ago        Running             kube-proxy                2                   33794e3552983       kube-proxy-sqk96
	2d5168de1ca60       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 minutes ago        Running             coredns                   2                   b2a043288f89f       coredns-7db6d8ff4d-k6r5l
	aa1dfc42a005d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   5 minutes ago        Running             kube-scheduler            2                   6be19c0f23e95       kube-scheduler-ha-767488
	b50ae6e8e38f6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   5 minutes ago        Running             kindnet-cni               2                   aab24bd3a9edf       kindnet-6x56p
	a311cff0c8ecc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   5 minutes ago        Running             etcd                      2                   b2a875cf8cfc1       etcd-ha-767488
	18d7603603557       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   10 minutes ago       Exited              kube-vip                  1                   4ac1d50b066bb       kube-vip-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago       Exited              busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago       Exited              busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago       Exited              coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   18 minutes ago       Exited              kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago       Exited              coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   18 minutes ago       Exited              kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   18 minutes ago       Exited              kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago       Exited              etcd                      1                   c38a2d43be153       etcd-ha-767488
	
	
	==> coredns [2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[283503875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:42:53.754) (total time: 10001ms):
	Trace[283503875]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:43:03.755)
	Trace[283503875]: [10.001379228s] [10.001379228s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[841416442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.890) (total time: 11819ms):
	Trace[841416442]: ---"Objects listed" error:Unauthorized 11819ms (12:40:27.709)
	Trace[841416442]: [11.819152896s] [11.819152896s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2022085669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.047) (total time: 12661ms):
	Trace[2022085669]: ---"Objects listed" error:Unauthorized 12661ms (12:40:27.709)
	Trace[2022085669]: [12.66151731s] [12.66151731s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1130676405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:32.086) (total time: 10721ms):
	Trace[1130676405]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 10720ms (12:40:42.807)
	Trace[1130676405]: [10.721021558s] [10.721021558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b] <==
	Trace[394769481]: [10.001110422s] [10.001110422s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1030282606]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.356) (total time: 12346ms):
	Trace[1030282606]: ---"Objects listed" error:Unauthorized 12346ms (12:40:27.702)
	Trace[1030282606]: [12.346347085s] [12.346347085s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[418228940]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.563) (total time: 12139ms):
	Trace[418228940]: ---"Objects listed" error:Unauthorized 12138ms (12:40:27.702)
	Trace[418228940]: [12.139191986s] [12.139191986s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2011977158]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.350) (total time: 11455ms):
	Trace[2011977158]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11455ms (12:40:42.805)
	Trace[2011977158]: [11.45543795s] [11.45543795s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[856661345]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.528) (total time: 11278ms):
	Trace[856661345]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11278ms (12:40:42.807)
	Trace[856661345]: [11.278535864s] [11.278535864s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-767488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:43:36 +0000   Mon, 29 Jul 2024 12:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-767488
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4910accb98434efca56ff8b39068800c
	  System UUID:                4910accb-9843-4efc-a56f-f8b39068800c
	  Boot ID:                    f538ab8c-89b7-40ce-b82e-7644a867ee15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4ppv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  default                     busybox-fc5497c4f-trgfp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7db6d8ff4d-k6r5l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-qqt5t             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-767488                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-6x56p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-767488             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-767488    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-sqk96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-767488             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-767488                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 5m6s                 kube-proxy       
	  Normal   Starting                 17m                  kube-proxy       
	  Normal   Starting                 27m                  kube-proxy       
	  Normal   Starting                 27m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)    kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)    kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)    kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientMemory  27m                  kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     27m                  kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   Starting                 27m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    27m                  kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           27m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   NodeReady                27m                  kubelet          Node ha-767488 status is now: NodeReady
	  Normal   RegisteredNode           25m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           24m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           22m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Warning  ContainerGCFailed        6m26s (x4 over 19m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m55s                node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           95s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           31s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	
	
	Name:               ha-767488-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:22:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:44:13 +0000   Mon, 29 Jul 2024 12:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-767488-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9a3fe2d6456464f8574d4c1d95e4f21
	  System UUID:                d9a3fe2d-6456-464f-8574-d4c1d95e4f21
	  Boot ID:                    5cef9760-b094-4a5a-943c-bf1eb8a249d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jjx77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-767488-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-l7jpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-767488-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-767488-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-d9lg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-767488-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-767488-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m38s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 22m                    kube-proxy       
	  Normal   Starting                 26m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           25m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           24m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   NodeAllocatableEnforced  22m                    kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 22m                    kubelet          Node ha-767488-m02 has been rebooted, boot id: 9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Normal   Starting                 22m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m                    kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  ContainerGCFailed        17m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           16m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           95s                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	
	
	Name:               ha-767488-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-767488-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ca168b2de41451a82ff59b787c535ad
	  System UUID:                5ca168b2-de41-451a-82ff-59b787c535ad
	  Boot ID:                    f8572197-e522-4d4c-92d1-3c0e30179060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-767488-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-bz9pp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-767488-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-767488-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-tzj27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-767488-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-767488-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 46s                kube-proxy       
	  Normal   Starting                 24m                kube-proxy       
	  Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           24m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           24m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           24m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           22m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeNotReady             21m                node-controller  Node ha-767488-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           16m                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           4m56s              node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  65s (x2 over 66s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x2 over 66s)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x2 over 66s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 65s                kubelet          Node ha-767488-m03 has been rebooted, boot id: f8572197-e522-4d4c-92d1-3c0e30179060
	  Normal   NodeReady                65s                kubelet          Node ha-767488-m03 status is now: NodeReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	
	
	Name:               ha-767488-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:48:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:48:20 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-767488-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 326a5fb51aae42b7b8056fc3c9e53faf
	  System UUID:                326a5fb5-1aae-42b7-b805-6fc3c9e53faf
	  Boot ID:                    5525fcaf-d53c-41e4-a857-9519defa86cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bgb2n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-2m5gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9s                 kube-proxy       
	  Normal   Starting                 23m                kube-proxy       
	  Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           23m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeReady                23m                kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal   RegisteredNode           22m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeNotReady             21m                node-controller  Node ha-767488-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           16m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           4m56s              node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           32s                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13s (x2 over 13s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x2 over 13s)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x2 over 13s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 13s                kubelet          Node ha-767488-m04 has been rebooted, boot id: 5525fcaf-d53c-41e4-a857-9519defa86cc
	  Normal   NodeReady                13s                kubelet          Node ha-767488-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	[Jul29 12:42] systemd-fstab-generator[6690]: Ignoring "noauto" option for root device
	[  +0.156941] systemd-fstab-generator[6701]: Ignoring "noauto" option for root device
	[  +0.179346] systemd-fstab-generator[6715]: Ignoring "noauto" option for root device
	[  +0.150490] systemd-fstab-generator[6727]: Ignoring "noauto" option for root device
	[  +0.290091] systemd-fstab-generator[6755]: Ignoring "noauto" option for root device
	[  +8.442471] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.232725] systemd-fstab-generator[6987]: Ignoring "noauto" option for root device
	[  +4.882020] kauditd_printk_skb: 101 callbacks suppressed
	[Jul29 12:43] kauditd_printk_skb: 11 callbacks suppressed
	[ +20.903422] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"info","ts":"2024-07-29T12:40:51.117051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to d9000071a51f92ea at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:52.278962Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T12:40:52.279017Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	{"level":"warn","ts":"2024-07-29T12:40:52.279122Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.279145Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.290948Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.291007Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:40:52.291066Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T12:40:52.291289Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291329Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291362Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291459Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291525Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291589Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291602Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291608Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.29172Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291751Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291782Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291865Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.304523Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.30477Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.304858Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> etcd [a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828] <==
	{"level":"warn","ts":"2024-07-29T12:47:10.268021Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:47:15.268948Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-29T12:47:15.269045Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-29T12:47:20.26945Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:47:20.269572Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T12:47:25.269545Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:25.26973Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:30.269867Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:30.270071Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:35.270164Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:35.270231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:40.271157Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T12:47:40.271332Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9000071a51f92ea","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T12:47:44.279992Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.280139Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.280224Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.287376Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"d9000071a51f92ea","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T12:47:44.287506Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:47:44.291104Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"d9000071a51f92ea","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T12:47:44.291154Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"warn","ts":"2024-07-29T12:47:44.965528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.230668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:44.966066Z","caller":"traceutil/trace.go:171","msg":"trace[453902694] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3782; }","duration":"106.799466ms","start":"2024-07-29T12:47:44.859168Z","end":"2024-07-29T12:47:44.965968Z","steps":["trace[453902694] 'range keys from in-memory index tree'  (duration: 103.837804ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:47:50.971028Z","caller":"traceutil/trace.go:171","msg":"trace[1827493490] linearizableReadLoop","detail":"{readStateIndex:4330; appliedIndex:4330; }","duration":"111.574421ms","start":"2024-07-29T12:47:50.859424Z","end":"2024-07-29T12:47:50.970999Z","steps":["trace[1827493490] 'read index received'  (duration: 111.56917ms)","trace[1827493490] 'applied index is now lower than readState.Index'  (duration: 3.955µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:47:50.971255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.808091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:50.971313Z","caller":"traceutil/trace.go:171","msg":"trace[1184293185] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3807; }","duration":"111.898697ms","start":"2024-07-29T12:47:50.8594Z","end":"2024-07-29T12:47:50.971299Z","steps":["trace[1184293185] 'agreement among raft nodes before linearized reading'  (duration: 111.702182ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:48:33 up 28 min,  0 users,  load average: 0.66, 0.36, 0.31
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:40:29.352753       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:29.352910       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:29.352935       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:40:29.353062       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:29.353084       1 main.go:299] handling current node
	I0729 12:40:39.356408       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:39.356467       1 main.go:299] handling current node
	I0729 12:40:39.356485       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:39.356493       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:39.356693       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:39.356728       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:39.356862       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:39.356895       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:42.805541       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0729 12:40:42.805601       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	I0729 12:40:49.352084       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:49.352122       1 main.go:299] handling current node
	I0729 12:40:49.352136       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:49.352140       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:49.352268       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:49.352274       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:49.352317       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:49.352321       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:50.573724       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	E0729 12:40:50.573775       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> kindnet [b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577] <==
	I0729 12:47:55.907962       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:05.899403       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:48:05.899505       1 main.go:299] handling current node
	I0729 12:48:05.899541       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:48:05.899572       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:05.899747       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:48:05.899783       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:48:05.900023       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:48:05.900055       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:48:15.907553       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:48:15.907654       1 main.go:299] handling current node
	I0729 12:48:15.907674       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:48:15.907683       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:15.907923       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:48:15.907962       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:48:15.908062       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:48:15.908093       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:48:25.900373       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:48:25.900491       1 main.go:299] handling current node
	I0729 12:48:25.900524       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:48:25.900544       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:48:25.900716       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:48:25.900741       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:48:25.900893       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:48:25.900921       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf] <==
	I0729 12:42:49.430000       1 options.go:221] external host was not specified, using 192.168.39.217
	I0729 12:42:49.431238       1 server.go:148] Version: v1.30.3
	I0729 12:42:49.431325       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:42:49.866306       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 12:42:49.878045       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:42:49.881570       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 12:42:49.881667       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 12:42:49.881920       1 instance.go:299] Using reconciler: lease
	W0729 12:43:09.865456       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 12:43:09.865695       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 12:43:09.882923       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15] <==
	I0729 12:45:59.628077       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 12:45:59.628961       1 aggregator.go:163] waiting for initial CRD sync...
	I0729 12:45:59.629024       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0729 12:45:59.629373       1 available_controller.go:423] Starting AvailableConditionController
	I0729 12:45:59.638565       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0729 12:45:59.676095       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:45:59.693227       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:45:59.693266       1 policy_source.go:224] refreshing policies
	I0729 12:45:59.696534       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:45:59.726284       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:45:59.731995       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:45:59.732057       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 12:45:59.732134       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:45:59.732180       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:45:59.732200       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:45:59.732874       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:45:59.733570       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:45:59.733620       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:45:59.733627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:45:59.733632       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:45:59.738603       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:46:00.638385       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 12:46:01.060195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.45]
	I0729 12:46:01.061970       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 12:46:01.070860       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b] <==
	I0729 12:45:07.191616       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:45:07.718594       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:45:07.718688       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:45:07.721330       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:45:07.723008       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:45:07.723288       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:45:07.723400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 12:45:17.724994       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.217:8443/healthz\": dial tcp 192.168.39.217:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0] <==
	I0729 12:46:57.268379       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 12:46:57.274850       1 shared_informer.go:320] Caches are synced for service account
	I0729 12:46:57.279272       1 shared_informer.go:320] Caches are synced for job
	I0729 12:46:57.281941       1 shared_informer.go:320] Caches are synced for HPA
	I0729 12:46:57.302717       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m04"
	I0729 12:46:57.302855       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488"
	I0729 12:46:57.302885       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m02"
	I0729 12:46:57.302912       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m03"
	I0729 12:46:57.303103       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:46:57.306059       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 12:46:57.316545       1 shared_informer.go:320] Caches are synced for disruption
	I0729 12:46:57.355364       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 12:46:57.366896       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 12:46:57.410323       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:46:57.414574       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.464419       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.471777       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:46:57.890551       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958671       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:47:28.903601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.335µs"
	I0729 12:47:29.046385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.149µs"
	I0729 12:47:29.064238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.904µs"
	I0729 12:47:29.065729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.519µs"
	I0729 12:48:20.069830       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	E0729 12:38:58.686582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.830714       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.830972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:17.119592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:17.119666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.551700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.552162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:41.695764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:41.696014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:06.272423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:06.272868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:21.631308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:21.631558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:24.703209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:24.703408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0] <==
	I0729 12:43:26.005579       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:43:26.005921       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:43:26.005961       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:43:26.008082       1 config.go:192] "Starting service config controller"
	I0729 12:43:26.008129       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:43:26.008154       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:43:26.008158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:43:26.009228       1 config.go:319] "Starting node config controller"
	I0729 12:43:26.009261       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0729 12:43:29.022916       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:43:29.023565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.024040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.095150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:43:33.908457       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:43:34.210378       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:43:34.908288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	W0729 12:40:22.243325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:40:22.243419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:40:23.541590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:40:23.541652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:40:24.030114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.030218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:24.827144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.827194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:25.963020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:40:25.963127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:40:27.525553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:40:27.525717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:40:31.457216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:40:31.457249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:40:31.946204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:31.946255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:31.987696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:40:31.987742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:40:32.539286       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:40:32.539318       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:40:33.993576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:40:33.993629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:40:34.509160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:34.509295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:52.283637       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba] <==
	W0729 12:45:23.326174       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:23.326305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:24.458610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.217:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:24.458749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.217:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:31.373849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:31.374004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:32.914014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.217:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:32.914137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.217:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:33.370926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:33.371053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:38.194294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:38.194364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:39.021894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:39.022019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:40.035456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:40.035546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:41.325471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:41.325533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:53.830647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:53.830888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.268363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.268503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.603189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.603346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	I0729 12:46:02.189750       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:45:51 ha-767488 kubelet[1381]: I0729 12:45:51.667355    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:45:51 ha-767488 kubelet[1381]: E0729 12:45:51.667697    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:45:57 ha-767488 kubelet[1381]: I0729 12:45:57.666883    1381 scope.go:117] "RemoveContainer" containerID="76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf"
	Jul 29 12:46:04 ha-767488 kubelet[1381]: I0729 12:46:04.667382    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:04 ha-767488 kubelet[1381]: E0729 12:46:04.667691    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:06 ha-767488 kubelet[1381]: E0729 12:46:06.693101    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: I0729 12:46:19.667641    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: E0729 12:46:19.668493    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: I0729 12:46:32.667323    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: E0729 12:46:32.669010    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:45 ha-767488 kubelet[1381]: I0729 12:46:45.667906    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:47:06 ha-767488 kubelet[1381]: E0729 12:47:06.688205    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:48:06 ha-767488 kubelet[1381]: E0729 12:48:06.688745    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:48:31.979343  262797 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
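The stderr above ends with "failed to read file .../lastStart.txt: bufio.Scanner: token too long", which is Go's bufio.Scanner giving up on a single line longer than its default 64 KiB token limit. A minimal sketch of that failure mode and the usual workaround (raising the cap with Scanner.Buffer); the file name and the 10 MiB limit are illustrative, not taken from minikube's source:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-token cap is bufio.MaxScanTokenSize (64 KiB). One very long
		// line makes sc.Scan() stop and sc.Err() return "bufio.Scanner: token too long".
		// Raising the cap avoids that; 10 MiB here is an arbitrary illustrative limit.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			_ = sc.Text() // process the line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}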
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.05s)
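Several kube-proxy reflector lines in the dump above contain sequences such as %!D(MISSING), %!s(MISSING), %!F(MISSING) and %!C(MISSING). These are Go fmt artifacts: the request URL's percent-escapes (for example %3D is the encoded "=", %2F the encoded "/") were evidently passed through a Printf-style call as the format string, so fmt parses them as verbs with no matching argument. A minimal sketch of how that output arises; this illustrates the fmt behaviour only and makes no claim about the exact call site inside client-go or kube-proxy:

	package main

	import "fmt"

	func main() {
		// A URL with percent-escapes: %3D encodes "=".
		u := "https://cp:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488"

		// Used as a format string, "%3D" is parsed as verb 'D' with width 3; with no
		// argument supplied, fmt emits "%!D(MISSING)", the pattern seen in the logs.
		fmt.Printf(u + "\n") // ...fieldSelector=metadata.name%!D(MISSING)ha-767488

		// Passing the URL as an argument instead keeps the escapes intact.
		fmt.Printf("%s\n", u) // ...fieldSelector=metadata.name%3Dha-767488
	}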

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (85.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-767488 --control-plane -v=7 --alsologtostderr
E0729 12:49:27.881411  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-767488 --control-plane -v=7 --alsologtostderr: (1m22.185785964s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr: (1.122766992s)
ha_test.go:616: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-767488-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:619: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-767488-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:622: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-767488-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:625: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr": ha-767488
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-767488-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-767488-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.958134792s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	| node    | ha-767488 node delete m03 -v=7                                                   | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-767488 stop -v=7                                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true                                                         | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:40 UTC | 29 Jul 24 12:48 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	| node    | add -p ha-767488                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:48 UTC | 29 Jul 24 12:49 UTC |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:40:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:40:51.329866  260472 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:40:51.329974  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.329984  260472 out.go:304] Setting ErrFile to fd 2...
	I0729 12:40:51.329990  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.330183  260472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:40:51.330779  260472 out.go:298] Setting JSON to false
	I0729 12:40:51.331755  260472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8594,"bootTime":1722248257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:40:51.331823  260472 start.go:139] virtualization: kvm guest
	I0729 12:40:51.334313  260472 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:40:51.335770  260472 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:40:51.335784  260472 notify.go:220] Checking for updates...
	I0729 12:40:51.338199  260472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:40:51.339561  260472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:40:51.340932  260472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:40:51.342165  260472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:40:51.343840  260472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:40:51.345700  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:51.346109  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.346170  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.362742  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0729 12:40:51.363165  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.363711  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.363735  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.364108  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.364327  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.364586  260472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:40:51.365000  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.365043  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.379978  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0729 12:40:51.380389  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.380778  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.380814  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.381158  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.381323  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.415931  260472 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:40:51.417174  260472 start.go:297] selected driver: kvm2
	I0729 12:40:51.417189  260472 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.417335  260472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:40:51.417664  260472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.417770  260472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:40:51.432545  260472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:40:51.433500  260472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:40:51.433539  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:40:51.433548  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:40:51.433631  260472 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.433831  260472 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.435545  260472 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:40:51.436699  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:40:51.436735  260472 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:40:51.436747  260472 cache.go:56] Caching tarball of preloaded images
	I0729 12:40:51.436866  260472 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:40:51.436877  260472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:40:51.437012  260472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:40:51.437194  260472 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:40:51.437233  260472 start.go:364] duration metric: took 21.45µs to acquireMachinesLock for "ha-767488"
	I0729 12:40:51.437247  260472 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:40:51.437253  260472 fix.go:54] fixHost starting: 
	I0729 12:40:51.437521  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.437552  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.451341  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0729 12:40:51.451741  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.452191  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.452220  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.452535  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.452723  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.452885  260472 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:40:51.454319  260472 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:40:51.454350  260472 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:40:51.456154  260472 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:40:51.457351  260472 machine.go:94] provisionDockerMachine start ...
	I0729 12:40:51.457369  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.457584  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.459878  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460266  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.460296  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.460553  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460782  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.460935  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.461114  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.461124  260472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:40:51.569205  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.569235  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569499  260472 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:40:51.569524  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569693  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.572499  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.572988  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.573033  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.573160  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.573358  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573548  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573648  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.573898  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.574069  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.574089  260472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:40:51.701204  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.701229  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.703986  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704423  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.704461  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704639  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.704824  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.704975  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.705089  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.705288  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.705507  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.705531  260472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:40:51.817644  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:40:51.817684  260472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:40:51.817700  260472 buildroot.go:174] setting up certificates
	I0729 12:40:51.817709  260472 provision.go:84] configureAuth start
	I0729 12:40:51.817719  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.818054  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:40:51.820835  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821225  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.821246  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821413  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.823391  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823759  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.823788  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823928  260472 provision.go:143] copyHostCerts
	I0729 12:40:51.823969  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824015  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:40:51.824028  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824106  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:40:51.824213  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824238  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:40:51.824248  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824287  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:40:51.824345  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824376  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:40:51.824384  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824417  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:40:51.824477  260472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
	I0729 12:40:52.006332  260472 provision.go:177] copyRemoteCerts
	I0729 12:40:52.006418  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:40:52.006452  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.009130  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009520  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.009546  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.009964  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.010156  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.010326  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:40:52.094644  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:40:52.094738  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:40:52.119444  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:40:52.119509  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 12:40:52.143660  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:40:52.143716  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:40:52.167324  260472 provision.go:87] duration metric: took 349.60091ms to configureAuth
	I0729 12:40:52.167355  260472 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:40:52.167557  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:52.167627  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.170399  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170750  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.170769  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170976  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.171205  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171383  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171515  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.171707  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:52.171890  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:52.171904  260472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:42:30.662176  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:42:30.662209  260472 machine.go:97] duration metric: took 1m39.204842674s to provisionDockerMachine
	I0729 12:42:30.662225  260472 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:42:30.662240  260472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:42:30.662263  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.662582  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:42:30.662612  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.665494  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666063  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.666088  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666235  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.666474  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.666633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.666847  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:30.752735  260472 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:42:30.757792  260472 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:42:30.757820  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:42:30.757900  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:42:30.757994  260472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:42:30.758009  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:42:30.758096  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:42:30.768113  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:30.793284  260472 start.go:296] duration metric: took 131.040886ms for postStartSetup
	I0729 12:42:30.793328  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.793694  260472 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:42:30.793729  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.796515  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.796959  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.796985  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.797155  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.797360  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.797508  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.797632  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:42:30.883560  260472 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:42:30.883593  260472 fix.go:56] duration metric: took 1m39.446338951s for fixHost
	I0729 12:42:30.883619  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.886076  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886458  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.886483  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.886829  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.886996  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.887140  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.887303  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:42:30.887526  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:42:30.887541  260472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:42:30.997876  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256950.957387407
	
	I0729 12:42:30.997906  260472 fix.go:216] guest clock: 1722256950.957387407
	I0729 12:42:30.997917  260472 fix.go:229] Guest: 2024-07-29 12:42:30.957387407 +0000 UTC Remote: 2024-07-29 12:42:30.883602483 +0000 UTC m=+99.589379345 (delta=73.784924ms)
	I0729 12:42:30.997948  260472 fix.go:200] guest clock delta is within tolerance: 73.784924ms
	I0729 12:42:30.997986  260472 start.go:83] releasing machines lock for "ha-767488", held for 1m39.560717836s
	I0729 12:42:30.998041  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.998327  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:31.000905  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001304  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.001335  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001531  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002184  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002392  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002499  260472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:42:31.002576  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.002622  260472 ssh_runner.go:195] Run: cat /version.json
	I0729 12:42:31.002652  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.005308  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005500  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005704  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.005737  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005887  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006092  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006208  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.006233  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.006272  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006459  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.006551  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006697  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006864  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.115893  260472 ssh_runner.go:195] Run: systemctl --version
	I0729 12:42:31.122469  260472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:42:31.297345  260472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:42:31.304517  260472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:42:31.304592  260472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:42:31.316445  260472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:42:31.316475  260472 start.go:495] detecting cgroup driver to use...
	I0729 12:42:31.316547  260472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:42:31.333639  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:42:31.349241  260472 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:42:31.349303  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:42:31.364204  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:42:31.378300  260472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:42:31.534355  260472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:42:31.684660  260472 docker.go:233] disabling docker service ...
	I0729 12:42:31.684748  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:42:31.700676  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:42:31.715730  260472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:42:31.862044  260472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:42:32.012656  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:42:32.026627  260472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:42:32.048998  260472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:42:32.049086  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.060466  260472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:42:32.060565  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.071761  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.082721  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.094732  260472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:42:32.106637  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.117985  260472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.131937  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.142195  260472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:42:32.151406  260472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:42:32.160525  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:32.305601  260472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:42:40.307724  260472 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.002069181s)
	I0729 12:42:40.307768  260472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:42:40.307825  260472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:42:40.312866  260472 start.go:563] Will wait 60s for crictl version
	I0729 12:42:40.312915  260472 ssh_runner.go:195] Run: which crictl
	I0729 12:42:40.316658  260472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:42:40.356691  260472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:42:40.356775  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.385190  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.417948  260472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:42:40.419401  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:40.422540  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.422892  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:40.422937  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.423110  260472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:42:40.427910  260472 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:42:40.428052  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:42:40.428107  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.473605  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.473627  260472 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:42:40.473677  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.600040  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.600073  260472 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:42:40.600100  260472 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:42:40.600218  260472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
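The kubelet drop-in dumped above first clears the unit's original ExecStart (the empty ExecStart= line is the standard systemd way to reset the directive) and then redefines it with the node-specific flags. A minimal sketch of rendering that drop-in, for illustration only and not minikube's implementation; node name, IP and kubelet path are the values from the log, and the target path is the one the log scp's to a few lines below (10-kubeadm.conf):

    # Illustration only (not minikube source): render the kubelet systemd drop-in
    # shown above, using the node values from this log.
    from pathlib import Path

    NODE_NAME = "ha-767488"
    NODE_IP = "192.168.39.217"
    KUBELET = "/var/lib/minikube/binaries/v1.30.3/kubelet"

    lines = [
        "[Unit]",
        "Wants=crio.service",
        "",
        "[Service]",
        "ExecStart=",   # clears the original ExecStart before redefining it
        f"ExecStart={KUBELET} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "
        f"--config=/var/lib/kubelet/config.yaml --hostname-override={NODE_NAME} "
        f"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={NODE_IP}",
        "",
        "[Install]",
        "",
    ]

    # minikube copies the equivalent content to
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp below).
    Path("10-kubeadm.conf").write_text("\n".join(lines))
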
	I0729 12:42:40.600301  260472 ssh_runner.go:195] Run: crio config
	I0729 12:42:40.713091  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:42:40.713114  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:42:40.713124  260472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:42:40.713150  260472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:42:40.713297  260472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
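The kubeadm config rendered above is a single multi-document YAML covering InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal sketch (not part of minikube) of pulling the HA-relevant fields back out of it, assuming the rendered config has been saved locally as kubeadm.yaml (the log copies it to /var/tmp/minikube/kubeadm.yaml.new) and PyYAML is available:

    # Illustration only: parse the multi-document kubeadm config and print the
    # fields that define the HA topology and pod networking.
    import yaml  # PyYAML, assumed installed

    with open("kubeadm.yaml") as f:
        docs = {d["kind"]: d for d in yaml.safe_load_all(f) if d}

    cluster = docs["ClusterConfiguration"]
    print(cluster["controlPlaneEndpoint"])                # control-plane.minikube.internal:8443
    print(cluster["networking"]["podSubnet"])             # 10.244.0.0/16
    print(docs["KubeProxyConfiguration"]["clusterCIDR"])  # 10.244.0.0/16
    print(docs["KubeletConfiguration"]["cgroupDriver"])   # cgroupfs
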
	I0729 12:42:40.713315  260472 kube-vip.go:115] generating kube-vip config ...
	I0729 12:42:40.713354  260472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:42:40.731149  260472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:42:40.731283  260472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
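kube-vip runs as a static pod on the control-plane node; in the manifest above the VIP itself comes from the address env var (the APIServerHAVIP 192.168.39.254 from the cluster config earlier in the log), and cp_enable / lb_enable correspond to the "auto-enabling control-plane load-balancing" line above. A minimal sketch, not minikube code, that sanity-checks those fields after saving the manifest as kube-vip.yaml; requires PyYAML:

    # Illustration only: verify the kube-vip static-pod manifest carries the
    # expected HA settings.
    import yaml

    with open("kube-vip.yaml") as f:
        pod = yaml.safe_load(f)

    env = {e["name"]: e.get("value") for e in pod["spec"]["containers"][0]["env"]}
    assert pod["spec"]["hostNetwork"] is True
    assert env["address"] == "192.168.39.254"              # the control-plane VIP
    assert env["cp_enable"] == "true" and env["lb_enable"] == "true"
    print("kube-vip manifest matches the HA VIP configuration")
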
	I0729 12:42:40.731354  260472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:42:40.745678  260472 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:42:40.745771  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:42:40.756067  260472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:42:40.779511  260472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:42:40.802104  260472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:42:40.819400  260472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:42:40.835924  260472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:42:40.840719  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:40.986870  260472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:42:41.001565  260472 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:42:41.001593  260472 certs.go:194] generating shared ca certs ...
	I0729 12:42:41.001614  260472 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:42:41.001819  260472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:42:41.001875  260472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:42:41.001890  260472 certs.go:256] generating profile certs ...
	I0729 12:42:41.001972  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:42:41.002032  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:42:41.002065  260472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:42:41.002076  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:42:41.002091  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:42:41.002113  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:42:41.002131  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:42:41.002148  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:42:41.002165  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:42:41.002182  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:42:41.002198  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:42:41.002263  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:42:41.002296  260472 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:42:41.002305  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:42:41.002328  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:42:41.002348  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:42:41.002370  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:42:41.002406  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:41.002434  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.002446  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.002458  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.003070  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:42:41.027259  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:42:41.050547  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:42:41.074374  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:42:41.097416  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:42:41.120537  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:42:41.143944  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:42:41.166548  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:42:41.189375  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:42:41.212392  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:42:41.235698  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:42:41.258918  260472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:42:41.275147  260472 ssh_runner.go:195] Run: openssl version
	I0729 12:42:41.281163  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:42:41.291624  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296196  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296247  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.301759  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:42:41.310741  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:42:41.320986  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325289  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325343  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.331301  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:42:41.341279  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:42:41.351883  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.355957  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.356029  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.361571  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
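The repeated pattern above (link the PEM into /usr/share/ca-certificates and /etc/ssl/certs, hash it with openssl x509 -hash, then symlink /etc/ssl/certs/<hash>.0 to it) is what makes the CA certificates discoverable to OpenSSL-based clients on the node. A minimal sketch of the same two final steps for one certificate, not minikube's code; paths are the ones from the log and it would need root on the node:

    # Illustration only: mirror the openssl-hash symlink step from the log.
    import subprocess
    from pathlib import Path

    pem = Path("/usr/share/ca-certificates/2403402.pem")
    cert_in_etc = Path("/etc/ssl/certs/2403402.pem")

    # openssl x509 -hash -noout -in <pem>  ->  e.g. "3ec20f2e" (as in the log)
    subject_hash = subprocess.run(
        ["openssl", "x509", "-hash", "-noout", "-in", str(pem)],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    link = Path("/etc/ssl/certs") / f"{subject_hash}.0"
    if not link.is_symlink():
        # mirrors: ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/<hash>.0
        link.symlink_to(cert_in_etc)
    print(link)
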
	I0729 12:42:41.370434  260472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:42:41.374797  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:42:41.380122  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:42:41.385653  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:42:41.391013  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:42:41.396652  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:42:41.402042  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 12:42:41.407437  260472 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:42:41.407562  260472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:42:41.407600  260472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:42:41.448602  260472 cri.go:89] found id: "c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000"
	I0729 12:42:41.448629  260472 cri.go:89] found id: "6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8"
	I0729 12:42:41.448633  260472 cri.go:89] found id: "18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a"
	I0729 12:42:41.448637  260472 cri.go:89] found id: "66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b"
	I0729 12:42:41.448639  260472 cri.go:89] found id: "149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f"
	I0729 12:42:41.448643  260472 cri.go:89] found id: "7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85"
	I0729 12:42:41.448645  260472 cri.go:89] found id: "d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b"
	I0729 12:42:41.448647  260472 cri.go:89] found id: "88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770"
	I0729 12:42:41.448650  260472 cri.go:89] found id: "45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722"
	I0729 12:42:41.448655  260472 cri.go:89] found id: "76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00"
	I0729 12:42:41.448657  260472 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:42:41.448660  260472 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:42:41.448662  260472 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:42:41.448665  260472 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:42:41.448671  260472 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:42:41.448673  260472 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:42:41.448676  260472 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:42:41.448680  260472 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:42:41.448682  260472 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:42:41.448685  260472 cri.go:89] found id: ""
	I0729 12:42:41.448727  260472 ssh_runner.go:195] Run: sudo runc list -f json
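The block of "found id:" lines above is minikube enumerating the kube-system containers CRI-O still knows about before restarting the cluster; it lists 19 container IDs plus one empty terminator. A minimal sketch, not minikube code, that re-runs the crictl query shown at 12:42:41.407600 and counts the IDs; it assumes crictl is on PATH on the node and sudo access:

    # Illustration only: re-run the crictl query from the log and count the IDs.
    import subprocess

    out = subprocess.run(
        ["sudo", "crictl", "ps", "-a", "--quiet",
         "--label", "io.kubernetes.pod.namespace=kube-system"],
        check=True, capture_output=True, text=True,
    ).stdout

    ids = [line for line in out.splitlines() if line]
    print(f"found {len(ids)} kube-system containers")   # 19 in the log above
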
	
	
	==> CRI-O <==
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.114019594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257398113996776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51c7cdd8-b20f-4b82-b9e0-dfdd901d9657 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.114607692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a6a4f18-6eef-4893-9b62-3d122f71c2c6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.114685418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a6a4f18-6eef-4893-9b62-3d122f71c2c6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.115186203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a6a4f18-6eef-4893-9b62-3d122f71c2c6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.166091372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=916a77af-6feb-42db-810f-86b91f9e242a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.166222892Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=916a77af-6feb-42db-810f-86b91f9e242a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.167222094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e076132-cb55-48b9-9dfa-34f085517e49 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.167698961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257398167669589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e076132-cb55-48b9-9dfa-34f085517e49 name=/runtime.v1.ImageService/ImageFsInfo
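The UsedBytes/InodesUsed figures in these ImageFsInfoResponse lines describe the CRI-O image store at /var/lib/containers/storage/overlay-images, and the Timestamp field is nanoseconds since the Unix epoch. A quick worked check, for illustration only, that it decodes to the same wall-clock time as the surrounding log lines:

    # Illustration only: decode the CRI FilesystemUsage timestamp (nanoseconds
    # since the Unix epoch) from the response above.
    from datetime import datetime, timezone

    ts_ns = 1722257398167669589          # Timestamp from the response above
    secs, nanos = divmod(ts_ns, 1_000_000_000)
    print(datetime.fromtimestamp(secs, tz=timezone.utc), f"+ {nanos} ns")
    # 2024-07-29 12:49:58+00:00 + 167669589 ns  -- matches the "12:49:58" log time
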
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.168552179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fd83b63-87ed-486b-be79-e4d59cba7a8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.168622303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fd83b63-87ed-486b-be79-e4d59cba7a8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.169170321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fd83b63-87ed-486b-be79-e4d59cba7a8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.212150510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba637758-5fa5-4065-b62b-c3a127d27635 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.212225381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba637758-5fa5-4065-b62b-c3a127d27635 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.213222165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8b86fa6-b14b-430e-a5ba-7bb45e57dc9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.214115449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257398214088702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8b86fa6-b14b-430e-a5ba-7bb45e57dc9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.214693094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce1fad99-a229-4bc6-a054-991f98a8dc0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.214782575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce1fad99-a229-4bc6-a054-991f98a8dc0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.215372744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce1fad99-a229-4bc6-a054-991f98a8dc0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.263572769Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1919be5c-c1c6-4192-a890-23b16d463e7f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.263700764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1919be5c-c1c6-4192-a890-23b16d463e7f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.264937787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd2e85d5-1d8a-43d3-b205-4884171d69c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.265400655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257398265375807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd2e85d5-1d8a-43d3-b205-4884171d69c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.265987695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d515f10-8232-4235-810a-baf751cd5c49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.266062036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d515f10-8232-4235-810a-baf751cd5c49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:49:58 ha-767488 crio[6770]: time="2024-07-29 12:49:58.268520467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d515f10-8232-4235-810a-baf751cd5c49 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f525ef9d81722       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   3 minutes ago       Running             kube-controller-manager   9                   309c197fc5d30       kube-controller-manager-ha-767488
	e93850281207e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   4 minutes ago       Running             kube-apiserver            4                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	2276a6710daab       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   4 minutes ago       Exited              kube-controller-manager   8                   309c197fc5d30       kube-controller-manager-ha-767488
	cf19aeac69879       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner       5                   69a46f7b3f55b       storage-provisioner
	29fa3f76ed7cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       4                   69a46f7b3f55b       storage-provisioner
	25b0c852c73cd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   2                   10b4f76c89c4a       busybox-fc5497c4f-trgfp
	8e5267677fe3d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   2                   a2229e9d163bb       busybox-fc5497c4f-4ppv4
	76489ee06a477       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   7 minutes ago       Exited              kube-apiserver            3                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	9d1de005960b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   7 minutes ago       Running             coredns                   2                   00984bd8001fe       coredns-7db6d8ff4d-qqt5t
	b384618e5ae14       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   7 minutes ago       Running             kube-vip                  2                   2c2519cb3cb91       kube-vip-ha-767488
	cc3c94fe6246a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   7 minutes ago       Running             kube-proxy                2                   33794e3552983       kube-proxy-sqk96
	2d5168de1ca60       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   7 minutes ago       Running             coredns                   2                   b2a043288f89f       coredns-7db6d8ff4d-k6r5l
	aa1dfc42a005d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   7 minutes ago       Running             kube-scheduler            2                   6be19c0f23e95       kube-scheduler-ha-767488
	b50ae6e8e38f6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   7 minutes ago       Running             kindnet-cni               2                   aab24bd3a9edf       kindnet-6x56p
	a311cff0c8ecc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 minutes ago       Running             etcd                      2                   b2a875cf8cfc1       etcd-ha-767488
	18d7603603557       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   11 minutes ago      Exited              kube-vip                  1                   4ac1d50b066bb       kube-vip-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   19 minutes ago      Exited              busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   19 minutes ago      Exited              busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 minutes ago      Exited              coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 minutes ago      Exited              kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 minutes ago      Exited              coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   19 minutes ago      Exited              kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   19 minutes ago      Exited              kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   19 minutes ago      Exited              etcd                      1                   c38a2d43be153       etcd-ha-767488
	
	
	==> coredns [2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[283503875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:42:53.754) (total time: 10001ms):
	Trace[283503875]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:43:03.755)
	Trace[283503875]: [10.001379228s] [10.001379228s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[841416442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.890) (total time: 11819ms):
	Trace[841416442]: ---"Objects listed" error:Unauthorized 11819ms (12:40:27.709)
	Trace[841416442]: [11.819152896s] [11.819152896s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2022085669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.047) (total time: 12661ms):
	Trace[2022085669]: ---"Objects listed" error:Unauthorized 12661ms (12:40:27.709)
	Trace[2022085669]: [12.66151731s] [12.66151731s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1130676405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:32.086) (total time: 10721ms):
	Trace[1130676405]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 10720ms (12:40:42.807)
	Trace[1130676405]: [10.721021558s] [10.721021558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b] <==
	Trace[394769481]: [10.001110422s] [10.001110422s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1030282606]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.356) (total time: 12346ms):
	Trace[1030282606]: ---"Objects listed" error:Unauthorized 12346ms (12:40:27.702)
	Trace[1030282606]: [12.346347085s] [12.346347085s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[418228940]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.563) (total time: 12139ms):
	Trace[418228940]: ---"Objects listed" error:Unauthorized 12138ms (12:40:27.702)
	Trace[418228940]: [12.139191986s] [12.139191986s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2011977158]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.350) (total time: 11455ms):
	Trace[2011977158]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11455ms (12:40:42.805)
	Trace[2011977158]: [11.45543795s] [11.45543795s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[856661345]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.528) (total time: 11278ms):
	Trace[856661345]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11278ms (12:40:42.807)
	Trace[856661345]: [11.278535864s] [11.278535864s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-767488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:49:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-767488
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4910accb98434efca56ff8b39068800c
	  System UUID:                4910accb-9843-4efc-a56f-f8b39068800c
	  Boot ID:                    f538ab8c-89b7-40ce-b82e-7644a867ee15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4ppv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  default                     busybox-fc5497c4f-trgfp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-7db6d8ff4d-k6r5l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-7db6d8ff4d-qqt5t             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-767488                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-6x56p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-767488             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-767488    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-sqk96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-767488             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-767488                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 6m32s                kube-proxy       
	  Normal   Starting                 18m                  kube-proxy       
	  Normal   Starting                 28m                  kube-proxy       
	  Normal   Starting                 29m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     29m (x7 over 29m)    kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29m (x8 over 29m)    kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  29m (x8 over 29m)    kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     28m                  kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   Starting                 28m                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    28m                  kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  28m                  kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  28m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           28m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   NodeReady                28m                  kubelet          Node ha-767488 status is now: NodeReady
	  Normal   RegisteredNode           27m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           25m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           23m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Warning  ContainerGCFailed        7m52s (x4 over 20m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           6m21s                node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           3m1s                 node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           117s                 node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           15s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	
	
	Name:               ha-767488-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:22:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:49:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-767488-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9a3fe2d6456464f8574d4c1d95e4f21
	  System UUID:                d9a3fe2d-6456-464f-8574-d4c1d95e4f21
	  Boot ID:                    5cef9760-b094-4a5a-943c-bf1eb8a249d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jjx77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-767488-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-l7jpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-767488-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-767488-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-d9lg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-767488-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-767488-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m4s                   kube-proxy       
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           27m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           27m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           25m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  Rebooted                 24m                    kubelet          Node ha-767488-m02 has been rebooted, boot id: 9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Normal   NodeHasSufficientPID     24m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 24m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  24m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  24m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m                    kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  ContainerGCFailed        19m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           17m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   Starting                 6m53s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m53s (x8 over 6m53s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m53s (x8 over 6m53s)  kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m53s (x7 over 6m53s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m21s                  node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           3m1s                   node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           117s                   node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           15s                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	
	
	Name:               ha-767488-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:49:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-767488-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ca168b2de41451a82ff59b787c535ad
	  System UUID:                5ca168b2-de41-451a-82ff-59b787c535ad
	  Boot ID:                    f8572197-e522-4d4c-92d1-3c0e30179060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-767488-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-bz9pp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-767488-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-767488-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-tzj27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-767488-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-767488-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   Starting                 26m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           26m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           26m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           25m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           23m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeNotReady             22m                    node-controller  Node ha-767488-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           17m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           6m21s                  node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           3m1s                   node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m30s (x2 over 2m31s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x2 over 2m31s)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x2 over 2m31s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m30s                  kubelet          Node ha-767488-m03 has been rebooted, boot id: f8572197-e522-4d4c-92d1-3c0e30179060
	  Normal   NodeReady                2m30s                  kubelet          Node ha-767488-m03 status is now: NodeReady
	  Normal   RegisteredNode           117s                   node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           15s                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	
	
	Name:               ha-767488-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:49:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-767488-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 326a5fb51aae42b7b8056fc3c9e53faf
	  System UUID:                326a5fb5-1aae-42b7-b805-6fc3c9e53faf
	  Boot ID:                    5525fcaf-d53c-41e4-a857-9519defa86cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bgb2n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-proxy-2m5gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 95s                kube-proxy       
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     25m (x2 over 25m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  25m (x2 over 25m)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           25m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeReady                24m                kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal   RegisteredNode           23m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeNotReady             22m                node-controller  Node ha-767488-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           17m                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           6m21s              node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           3m1s               node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           117s               node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   Starting                 99s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  98s (x2 over 98s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s (x2 over 98s)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x2 over 98s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 98s                kubelet          Node ha-767488-m04 has been rebooted, boot id: 5525fcaf-d53c-41e4-a857-9519defa86cc
	  Normal   NodeReady                98s                kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	
	
	Name:               ha-767488-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_49_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:49:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:49:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-767488-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 07369ebad7c74bdca88f86988699fb71
	  System UUID:                07369eba-d7c7-4bdc-a88f-86988699fb71
	  Boot ID:                    e2fc7bfa-d1ad-4431-89cf-7afed400131a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-767488-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         33s
	  kube-system                 kindnet-6kzhd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      35s
	  kube-system                 kube-apiserver-ha-767488-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-ha-767488-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-dhfx4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-ha-767488-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-vip-ha-767488-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node ha-767488-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node ha-767488-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node ha-767488-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           32s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
	  Normal  RegisteredNode           31s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
	  Normal  RegisteredNode           31s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
	  Normal  RegisteredNode           15s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
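	
	The node details above are kubectl's describe view of all five ha-767488 machines as captured at the end of the run. As a minimal sketch of how the same readiness conditions could be re-checked against a live cluster (assuming the kubeconfig context name matches the ha-767488 profile, which is not shown in this log), one could run:
	
	  kubectl --context ha-767488 get nodes -o wide                                                    # hypothetical spot check; context name assumed
	  kubectl --context ha-767488 describe node ha-767488-m02 | sed -n '/Conditions:/,/Addresses:/p'   # print only the Conditions block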
	
	
	==> dmesg <==
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	[Jul29 12:42] systemd-fstab-generator[6690]: Ignoring "noauto" option for root device
	[  +0.156941] systemd-fstab-generator[6701]: Ignoring "noauto" option for root device
	[  +0.179346] systemd-fstab-generator[6715]: Ignoring "noauto" option for root device
	[  +0.150490] systemd-fstab-generator[6727]: Ignoring "noauto" option for root device
	[  +0.290091] systemd-fstab-generator[6755]: Ignoring "noauto" option for root device
	[  +8.442471] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.232725] systemd-fstab-generator[6987]: Ignoring "noauto" option for root device
	[  +4.882020] kauditd_printk_skb: 101 callbacks suppressed
	[Jul29 12:43] kauditd_printk_skb: 11 callbacks suppressed
	[ +20.903422] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"info","ts":"2024-07-29T12:40:51.117051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to d9000071a51f92ea at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:52.278962Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T12:40:52.279017Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	{"level":"warn","ts":"2024-07-29T12:40:52.279122Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.279145Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.290948Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.291007Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:40:52.291066Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T12:40:52.291289Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291329Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291362Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291459Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291525Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291589Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291602Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291608Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.29172Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291751Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291782Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291865Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.304523Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.30477Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.304858Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> etcd [a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828] <==
	{"level":"warn","ts":"2024-07-29T12:47:44.965528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.230668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:44.966066Z","caller":"traceutil/trace.go:171","msg":"trace[453902694] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3782; }","duration":"106.799466ms","start":"2024-07-29T12:47:44.859168Z","end":"2024-07-29T12:47:44.965968Z","steps":["trace[453902694] 'range keys from in-memory index tree'  (duration: 103.837804ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:47:50.971028Z","caller":"traceutil/trace.go:171","msg":"trace[1827493490] linearizableReadLoop","detail":"{readStateIndex:4330; appliedIndex:4330; }","duration":"111.574421ms","start":"2024-07-29T12:47:50.859424Z","end":"2024-07-29T12:47:50.970999Z","steps":["trace[1827493490] 'read index received'  (duration: 111.56917ms)","trace[1827493490] 'applied index is now lower than readState.Index'  (duration: 3.955µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:47:50.971255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.808091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:50.971313Z","caller":"traceutil/trace.go:171","msg":"trace[1184293185] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3807; }","duration":"111.898697ms","start":"2024-07-29T12:47:50.8594Z","end":"2024-07-29T12:47:50.971299Z","steps":["trace[1184293185] 'agreement among raft nodes before linearized reading'  (duration: 111.702182ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:49:24.308023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(3528410088117503397 11573293933243462141 15636498394331976426) learners=(11036766341275668187)"}
	{"level":"info","ts":"2024-07-29T12:49:24.308421Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","added-peer-id":"992a788318b94edb","added-peer-peer-urls":["https://192.168.39.48:2380"]}
	{"level":"info","ts":"2024-07-29T12:49:24.308497Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.308543Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310618Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310715Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310741Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310759Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310862Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.31094Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb","remote-peer-urls":["https://192.168.39.48:2380"]}
	{"level":"info","ts":"2024-07-29T12:49:24.312889Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"a09c9983ac28f1fd","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.668481Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.668552Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.711343Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"992a788318b94edb","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T12:49:25.711402Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.721013Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.744542Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"992a788318b94edb","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T12:49:25.744609Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:26.936934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(3528410088117503397 11036766341275668187 11573293933243462141 15636498394331976426)"}
	{"level":"info","ts":"2024-07-29T12:49:26.937198Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd"}
	
	
	==> kernel <==
	 12:49:59 up 29 min,  0 users,  load average: 0.39, 0.35, 0.31
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:40:29.352753       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:29.352910       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:29.352935       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:40:29.353062       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:29.353084       1 main.go:299] handling current node
	I0729 12:40:39.356408       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:39.356467       1 main.go:299] handling current node
	I0729 12:40:39.356485       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:39.356493       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:39.356693       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:39.356728       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:39.356862       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:39.356895       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:42.805541       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0729 12:40:42.805601       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	I0729 12:40:49.352084       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:49.352122       1 main.go:299] handling current node
	I0729 12:40:49.352136       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:49.352140       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:49.352268       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:49.352274       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:49.352317       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:49.352321       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:50.573724       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	E0729 12:40:50.573775       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> kindnet [b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577] <==
	I0729 12:49:35.900328       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:49:35.900452       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:49:35.900480       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:49:35.900569       1 main.go:295] Handling node with IPs: map[192.168.39.48:{}]
	I0729 12:49:35.900593       1 main.go:322] Node ha-767488-m05 has CIDR [10.244.4.0/24] 
	I0729 12:49:45.899508       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:49:45.899688       1 main.go:299] handling current node
	I0729 12:49:45.899740       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:49:45.899763       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:49:45.900068       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:49:45.900124       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:49:45.900216       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:49:45.900241       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:49:45.900356       1 main.go:295] Handling node with IPs: map[192.168.39.48:{}]
	I0729 12:49:45.900424       1 main.go:322] Node ha-767488-m05 has CIDR [10.244.4.0/24] 
	I0729 12:49:55.899518       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:49:55.899661       1 main.go:299] handling current node
	I0729 12:49:55.899691       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:49:55.899723       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:49:55.899949       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:49:55.899997       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:49:55.900081       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:49:55.900102       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:49:55.900168       1 main.go:295] Handling node with IPs: map[192.168.39.48:{}]
	I0729 12:49:55.900187       1 main.go:322] Node ha-767488-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf] <==
	I0729 12:42:49.430000       1 options.go:221] external host was not specified, using 192.168.39.217
	I0729 12:42:49.431238       1 server.go:148] Version: v1.30.3
	I0729 12:42:49.431325       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:42:49.866306       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 12:42:49.878045       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:42:49.881570       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 12:42:49.881667       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 12:42:49.881920       1 instance.go:299] Using reconciler: lease
	W0729 12:43:09.865456       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 12:43:09.865695       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 12:43:09.882923       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15] <==
	I0729 12:45:59.628077       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 12:45:59.628961       1 aggregator.go:163] waiting for initial CRD sync...
	I0729 12:45:59.629024       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0729 12:45:59.629373       1 available_controller.go:423] Starting AvailableConditionController
	I0729 12:45:59.638565       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0729 12:45:59.676095       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:45:59.693227       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:45:59.693266       1 policy_source.go:224] refreshing policies
	I0729 12:45:59.696534       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:45:59.726284       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:45:59.731995       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:45:59.732057       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 12:45:59.732134       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:45:59.732180       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:45:59.732200       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:45:59.732874       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:45:59.733570       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:45:59.733620       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:45:59.733627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:45:59.733632       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:45:59.738603       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:46:00.638385       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 12:46:01.060195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.45]
	I0729 12:46:01.061970       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 12:46:01.070860       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b] <==
	I0729 12:45:07.191616       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:45:07.718594       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:45:07.718688       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:45:07.721330       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:45:07.723008       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:45:07.723288       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:45:07.723400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 12:45:17.724994       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.217:8443/healthz\": dial tcp 192.168.39.217:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0] <==
	I0729 12:46:57.302912       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m03"
	I0729 12:46:57.303103       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:46:57.306059       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 12:46:57.316545       1 shared_informer.go:320] Caches are synced for disruption
	I0729 12:46:57.355364       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 12:46:57.366896       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 12:46:57.410323       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:46:57.414574       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.464419       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.471777       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:46:57.890551       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958671       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:47:28.903601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.335µs"
	I0729 12:47:29.046385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.149µs"
	I0729 12:47:29.064238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.904µs"
	I0729 12:47:29.065729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.519µs"
	I0729 12:48:20.069830       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	E0729 12:49:23.798967       1 certificate_controller.go:146] Sync csr-bsr6r failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-bsr6r": the object has been modified; please apply your changes to the latest version and try again
	E0729 12:49:23.803140       1 certificate_controller.go:146] Sync csr-bsr6r failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-bsr6r": the object has been modified; please apply your changes to the latest version and try again
	I0729 12:49:23.879066       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-767488-m05\" does not exist"
	I0729 12:49:23.880933       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	I0729 12:49:23.909669       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-767488-m05" podCIDRs=["10.244.4.0/24"]
	I0729 12:49:27.365522       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m05"
	I0729 12:49:47.668674       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	E0729 12:38:58.686582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.830714       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.830972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:17.119592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:17.119666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.551700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.552162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:41.695764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:41.696014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:06.272423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:06.272868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:21.631308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:21.631558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:24.703209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:24.703408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0] <==
	I0729 12:43:26.005579       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:43:26.005921       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:43:26.005961       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:43:26.008082       1 config.go:192] "Starting service config controller"
	I0729 12:43:26.008129       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:43:26.008154       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:43:26.008158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:43:26.009228       1 config.go:319] "Starting node config controller"
	I0729 12:43:26.009261       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0729 12:43:29.022916       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:43:29.023565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.024040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.095150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:43:33.908457       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:43:34.210378       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:43:34.908288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	W0729 12:40:22.243325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:40:22.243419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:40:23.541590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:40:23.541652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:40:24.030114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.030218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:24.827144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.827194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:25.963020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:40:25.963127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:40:27.525553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:40:27.525717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:40:31.457216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:40:31.457249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:40:31.946204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:31.946255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:31.987696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:40:31.987742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:40:32.539286       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:40:32.539318       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:40:33.993576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:40:33.993629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:40:34.509160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:34.509295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:52.283637       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba] <==
	W0729 12:45:39.021894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:39.022019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:40.035456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:40.035546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:41.325471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:41.325533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:53.830647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:53.830888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.268363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.268503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.603189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.603346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	I0729 12:46:02.189750       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:49:24.020347       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s8hcn\": pod kube-proxy-s8hcn is already assigned to node \"ha-767488-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s8hcn" node="ha-767488-m05"
	E0729 12:49:24.020932       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cfd78765-9085-4263-b6eb-42118268bc39(kube-system/kube-proxy-s8hcn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s8hcn"
	E0729 12:49:24.021068       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s8hcn\": pod kube-proxy-s8hcn is already assigned to node \"ha-767488-m05\"" pod="kube-system/kube-proxy-s8hcn"
	I0729 12:49:24.021283       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s8hcn" node="ha-767488-m05"
	E0729 12:49:24.113163       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nxt4k\": pod kube-proxy-nxt4k is already assigned to node \"ha-767488-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nxt4k" node="ha-767488-m05"
	E0729 12:49:24.113465       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cee16aeb-6bb7-4a63-a560-ac46a6f443bb(kube-system/kube-proxy-nxt4k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nxt4k"
	E0729 12:49:24.113782       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6kzhd\": pod kindnet-6kzhd is already assigned to node \"ha-767488-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-6kzhd" node="ha-767488-m05"
	E0729 12:49:24.115561       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0da6e183-04cd-4f86-b76b-af4382b3e9b8(kube-system/kindnet-6kzhd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6kzhd"
	E0729 12:49:24.115636       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6kzhd\": pod kindnet-6kzhd is already assigned to node \"ha-767488-m05\"" pod="kube-system/kindnet-6kzhd"
	I0729 12:49:24.115694       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6kzhd" node="ha-767488-m05"
	E0729 12:49:24.113990       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nxt4k\": pod kube-proxy-nxt4k is already assigned to node \"ha-767488-m05\"" pod="kube-system/kube-proxy-nxt4k"
	I0729 12:49:24.119237       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nxt4k" node="ha-767488-m05"
	
	
	==> kubelet <==
	Jul 29 12:46:06 ha-767488 kubelet[1381]: E0729 12:46:06.693101    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: I0729 12:46:19.667641    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: E0729 12:46:19.668493    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: I0729 12:46:32.667323    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: E0729 12:46:32.669010    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:45 ha-767488 kubelet[1381]: I0729 12:46:45.667906    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:47:06 ha-767488 kubelet[1381]: E0729 12:47:06.688205    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:48:06 ha-767488 kubelet[1381]: E0729 12:48:06.688745    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:49:06 ha-767488 kubelet[1381]: E0729 12:49:06.683298    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:49:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:49:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:49:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:49:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:49:57.800294  263504 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
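The "bufio.Scanner: token too long" error in the stderr block above is Go's standard failure mode when a single line exceeds bufio.Scanner's default 64 KiB token buffer; here minikube hit it while re-reading a very long line in lastStart.txt. A minimal Go sketch (not minikube's actual code) reproducing the error and showing the usual remedy of enlarging the buffer via Scanner.Buffer:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// One "line" longer than bufio.MaxScanTokenSize (64 KiB) triggers the error.
		long := strings.Repeat("x", 100*1024)

		s := bufio.NewScanner(strings.NewReader(long))
		for s.Scan() {
		}
		fmt.Println("default buffer:", s.Err()) // bufio.Scanner: token too long

		// A larger buffer (1 MiB here) lets the same line scan cleanly.
		s = bufio.NewScanner(strings.NewReader(long))
		s.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for s.Scan() {
		}
		fmt.Println("1 MiB buffer:  ", s.Err()) // <nil>
	}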
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (85.94s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:304: expected profile "ha-767488" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-767488\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-767488\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\
":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-767488\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.217\",\"Port\":8443,\"KubernetesVersion\":\"
v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.45\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.210\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.181\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.39.48\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"i
ngress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608
000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
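The check at ha_test.go:304 counts the entries under Config.Nodes for the "ha-767488" profile in the 'profile list --output json' output above and fails because the newly added m05 control-plane node brings the count to 5 instead of the expected 4. A minimal Go sketch (a hypothetical helper, not the test's own code) of that count, assuming only the JSON shape visible in the error message:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// profileList mirrors just the fields of 'minikube profile list --output json'
	// that a node-count check needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Usage: out/minikube-linux-amd64 profile list --output json | go run countnodes.go
		var pl profileList
		if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-767488" {
				fmt.Printf("profile %s has %d nodes\n", p.Name, len(p.Config.Nodes))
			}
		}
	}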
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-767488 -n ha-767488
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 logs -n 25: (1.97966583s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m04 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp testdata/cp-test.txt                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488 sudo cat                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m02 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | ha-767488-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767488 ssh -n ha-767488-m03 sudo cat                                          | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767488 node stop m02 -v=7                                                     | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767488 node start m02 -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488 -v=7                                                           | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767488 -v=7                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true -v=7                                                    | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767488                                                                | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	| node    | ha-767488 node delete m03 -v=7                                                   | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-767488 stop -v=7                                                              | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767488 --wait=true                                                         | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:40 UTC | 29 Jul 24 12:48 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	| node    | add -p ha-767488                                                                 | ha-767488 | jenkins | v1.33.1 | 29 Jul 24 12:48 UTC | 29 Jul 24 12:49 UTC |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:40:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:40:51.329866  260472 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:40:51.329974  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.329984  260472 out.go:304] Setting ErrFile to fd 2...
	I0729 12:40:51.329990  260472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:40:51.330183  260472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:40:51.330779  260472 out.go:298] Setting JSON to false
	I0729 12:40:51.331755  260472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8594,"bootTime":1722248257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:40:51.331823  260472 start.go:139] virtualization: kvm guest
	I0729 12:40:51.334313  260472 out.go:177] * [ha-767488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:40:51.335770  260472 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:40:51.335784  260472 notify.go:220] Checking for updates...
	I0729 12:40:51.338199  260472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:40:51.339561  260472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:40:51.340932  260472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:40:51.342165  260472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:40:51.343840  260472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:40:51.345700  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:51.346109  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.346170  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.362742  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0729 12:40:51.363165  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.363711  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.363735  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.364108  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.364327  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.364586  260472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:40:51.365000  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.365043  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.379978  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0729 12:40:51.380389  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.380778  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.380814  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.381158  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.381323  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.415931  260472 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:40:51.417174  260472 start.go:297] selected driver: kvm2
	I0729 12:40:51.417189  260472 start.go:901] validating driver "kvm2" against &{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.417335  260472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:40:51.417664  260472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.417770  260472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:40:51.432545  260472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:40:51.433500  260472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:40:51.433539  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:40:51.433548  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:40:51.433631  260472 start.go:340] cluster config:
	{Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:40:51.433831  260472 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:40:51.435545  260472 out.go:177] * Starting "ha-767488" primary control-plane node in "ha-767488" cluster
	I0729 12:40:51.436699  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:40:51.436735  260472 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:40:51.436747  260472 cache.go:56] Caching tarball of preloaded images
	I0729 12:40:51.436866  260472 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:40:51.436877  260472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
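The two preload lines above boil down to an existence check on the version-specific tarball in the local cache; only if it were missing would a download start. A minimal sketch of that check in Go, with the cache directory and file-name pattern as illustrative assumptions rather than minikube's actual cache layout:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasPreload reports whether a preloaded image tarball for the given
// Kubernetes version and container runtime is already cached locally.
// cacheDir and the file-name pattern are assumptions for illustration,
// not minikube's real cache layout.
func hasPreload(cacheDir, k8sVersion, runtime string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	path := filepath.Join(cacheDir, "preloaded-tarball", name)
	info, err := os.Stat(path)
	if err != nil || info.IsDir() {
		return path, false
	}
	return path, true
}

func main() {
	if path, ok := hasPreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.30.3", "cri-o"); ok {
		fmt.Println("found local preload:", path)
	} else {
		fmt.Println("preload missing, would download:", path)
	}
}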
	I0729 12:40:51.437012  260472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/config.json ...
	I0729 12:40:51.437194  260472 start.go:360] acquireMachinesLock for ha-767488: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:40:51.437233  260472 start.go:364] duration metric: took 21.45µs to acquireMachinesLock for "ha-767488"
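acquireMachinesLock above is parameterised with a 500ms retry delay and a 13m timeout. A rough stand-in for that retry-until-timeout pattern using an O_EXCL lock file; the path and semantics here are assumptions for illustration, not minikube's real locker:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file until timeout, retrying
// every delay. Illustrative only.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer release()
	fmt.Println("lock held; machine operations would run here")
}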
	I0729 12:40:51.437247  260472 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:40:51.437253  260472 fix.go:54] fixHost starting: 
	I0729 12:40:51.437521  260472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:40:51.437552  260472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:40:51.451341  260472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0729 12:40:51.451741  260472 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:40:51.452191  260472 main.go:141] libmachine: Using API Version  1
	I0729 12:40:51.452220  260472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:40:51.452535  260472 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:40:51.452723  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.452885  260472 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:40:51.454319  260472 fix.go:112] recreateIfNeeded on ha-767488: state=Running err=<nil>
	W0729 12:40:51.454350  260472 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:40:51.456154  260472 out.go:177] * Updating the running kvm2 "ha-767488" VM ...
	I0729 12:40:51.457351  260472 machine.go:94] provisionDockerMachine start ...
	I0729 12:40:51.457369  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:40:51.457584  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.459878  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460266  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.460296  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.460395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.460553  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.460782  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.460935  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.461114  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.461124  260472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:40:51.569205  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.569235  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569499  260472 buildroot.go:166] provisioning hostname "ha-767488"
	I0729 12:40:51.569524  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.569693  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.572499  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.572988  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.573033  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.573160  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.573358  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573548  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.573648  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.573898  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.574069  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.574089  260472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767488 && echo "ha-767488" | sudo tee /etc/hostname
	I0729 12:40:51.701204  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767488
	
	I0729 12:40:51.701229  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.703986  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704423  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.704461  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.704639  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:51.704824  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.704975  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:51.705089  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:51.705288  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:51.705507  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:51.705531  260472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767488/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:40:51.817644  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:40:51.817684  260472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 12:40:51.817700  260472 buildroot.go:174] setting up certificates
	I0729 12:40:51.817709  260472 provision.go:84] configureAuth start
	I0729 12:40:51.817719  260472 main.go:141] libmachine: (ha-767488) Calling .GetMachineName
	I0729 12:40:51.818054  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:40:51.820835  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821225  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.821246  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.821413  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:51.823391  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823759  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:51.823788  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:51.823928  260472 provision.go:143] copyHostCerts
	I0729 12:40:51.823969  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824015  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 12:40:51.824028  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 12:40:51.824106  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 12:40:51.824213  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824238  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 12:40:51.824248  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 12:40:51.824287  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 12:40:51.824345  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824376  260472 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 12:40:51.824384  260472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 12:40:51.824417  260472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 12:40:51.824477  260472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.ha-767488 san=[127.0.0.1 192.168.39.217 ha-767488 localhost minikube]
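provision.go:117 reissues the machine's server certificate so that it carries the SANs listed in that line (127.0.0.1, the node IP 192.168.39.217, ha-767488, localhost, minikube). A compressed sketch of issuing such a SAN-bearing certificate with crypto/x509; the key size, validity and the ephemeral CA are simplified assumptions, not minikube's exact settings:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Ephemeral CA for the sketch (minikube instead reuses ca.pem/ca-key.pem from its cert store).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-767488"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-767488", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}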
	I0729 12:40:52.006332  260472 provision.go:177] copyRemoteCerts
	I0729 12:40:52.006418  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:40:52.006452  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.009130  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009520  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.009546  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.009704  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.009964  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.010156  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.010326  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:40:52.094644  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:40:52.094738  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 12:40:52.119444  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:40:52.119509  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 12:40:52.143660  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:40:52.143716  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:40:52.167324  260472 provision.go:87] duration metric: took 349.60091ms to configureAuth
	I0729 12:40:52.167355  260472 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:40:52.167557  260472 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:40:52.167627  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:40:52.170399  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170750  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:40:52.170769  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:40:52.170976  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:40:52.171205  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171383  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:40:52.171515  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:40:52.171707  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:40:52.171890  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:40:52.171904  260472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:42:30.662176  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:42:30.662209  260472 machine.go:97] duration metric: took 1m39.204842674s to provisionDockerMachine
	I0729 12:42:30.662225  260472 start.go:293] postStartSetup for "ha-767488" (driver="kvm2")
	I0729 12:42:30.662240  260472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:42:30.662263  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.662582  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:42:30.662612  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.665494  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666063  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.666088  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.666235  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.666474  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.666633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.666847  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:30.752735  260472 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:42:30.757792  260472 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:42:30.757820  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 12:42:30.757900  260472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 12:42:30.757994  260472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 12:42:30.758009  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 12:42:30.758096  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:42:30.768113  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:30.793284  260472 start.go:296] duration metric: took 131.040886ms for postStartSetup
	I0729 12:42:30.793328  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.793694  260472 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 12:42:30.793729  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.796515  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.796959  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.796985  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.797155  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.797360  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.797508  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.797632  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	W0729 12:42:30.883560  260472 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 12:42:30.883593  260472 fix.go:56] duration metric: took 1m39.446338951s for fixHost
	I0729 12:42:30.883619  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:30.886076  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886458  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:30.886483  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:30.886633  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:30.886829  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.886996  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:30.887140  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:30.887303  260472 main.go:141] libmachine: Using SSH client type: native
	I0729 12:42:30.887526  260472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0729 12:42:30.887541  260472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:42:30.997876  260472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722256950.957387407
	
	I0729 12:42:30.997906  260472 fix.go:216] guest clock: 1722256950.957387407
	I0729 12:42:30.997917  260472 fix.go:229] Guest: 2024-07-29 12:42:30.957387407 +0000 UTC Remote: 2024-07-29 12:42:30.883602483 +0000 UTC m=+99.589379345 (delta=73.784924ms)
	I0729 12:42:30.997948  260472 fix.go:200] guest clock delta is within tolerance: 73.784924ms
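fix.go only resynchronises the guest clock when the measured delta exceeds a tolerance; here the 73.784924ms delta is accepted. A toy version of that comparison, with the 2s tolerance chosen for illustration rather than taken from minikube:

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether the guest clock is close enough to
// the host clock that no resync (e.g. via date over SSH) would be needed.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(73784924 * time.Nanosecond) // the 73.784924ms delta from the log
	if delta, ok := clockWithinTolerance(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}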
	I0729 12:42:30.997986  260472 start.go:83] releasing machines lock for "ha-767488", held for 1m39.560717836s
	I0729 12:42:30.998041  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:30.998327  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:31.000905  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001304  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.001335  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.001531  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002184  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002392  260472 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:42:31.002499  260472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:42:31.002576  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.002622  260472 ssh_runner.go:195] Run: cat /version.json
	I0729 12:42:31.002652  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:42:31.005308  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005500  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005704  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.005737  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.005887  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006092  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006208  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:31.006233  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:31.006272  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006395  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:42:31.006459  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.006551  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:42:31.006697  260472 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:42:31.006864  260472 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:42:31.115893  260472 ssh_runner.go:195] Run: systemctl --version
	I0729 12:42:31.122469  260472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:42:31.297345  260472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:42:31.304517  260472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:42:31.304592  260472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:42:31.316445  260472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:42:31.316475  260472 start.go:495] detecting cgroup driver to use...
	I0729 12:42:31.316547  260472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:42:31.333639  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:42:31.349241  260472 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:42:31.349303  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:42:31.364204  260472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:42:31.378300  260472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:42:31.534355  260472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:42:31.684660  260472 docker.go:233] disabling docker service ...
	I0729 12:42:31.684748  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:42:31.700676  260472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:42:31.715730  260472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:42:31.862044  260472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:42:32.012656  260472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:42:32.026627  260472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:42:32.048998  260472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:42:32.049086  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.060466  260472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:42:32.060565  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.071761  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.082721  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.094732  260472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:42:32.106637  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.117985  260472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.131937  260472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:42:32.142195  260472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:42:32.151406  260472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:42:32.160525  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:32.305601  260472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:42:40.307724  260472 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.002069181s)
	I0729 12:42:40.307768  260472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:42:40.307825  260472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:42:40.312866  260472 start.go:563] Will wait 60s for crictl version
	I0729 12:42:40.312915  260472 ssh_runner.go:195] Run: which crictl
	I0729 12:42:40.316658  260472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:42:40.356691  260472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
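After the CRI-O restart the start logic waits up to 60s for /var/run/crio/crio.sock and then for a crictl version response, as the surrounding lines show. A minimal poll of the first condition using a plain unix-socket dial; minikube itself goes through its ssh_runner and the CRI client, so this is only a sketch of the waiting pattern:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// waitForSocket retries connecting to a unix socket until it accepts a
// connection or the overall timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s not ready after %v: %w", path, timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket ready; a crictl/CRI version call would follow")
}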
	I0729 12:42:40.356775  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.385190  260472 ssh_runner.go:195] Run: crio --version
	I0729 12:42:40.417948  260472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:42:40.419401  260472 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:42:40.422540  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.422892  260472 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:42:40.422937  260472 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:42:40.423110  260472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:42:40.427910  260472 kubeadm.go:883] updating cluster {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:42:40.428052  260472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:42:40.428107  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.473605  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.473627  260472 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:42:40.473677  260472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:42:40.600040  260472 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:42:40.600073  260472 cache_images.go:84] Images are preloaded, skipping loading
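crio.go:514 concludes that tarball extraction can be skipped because every required image already shows up in `sudo crictl images --output json`. A rough check along those lines; the JSON field names below assume the usual crictl output shape ({"images":[{"repoTags":[...]}]}) and the two image tags are examples, so treat both as assumptions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the required tags that crictl does not report.
func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var listed crictlImages
	if err := json.Unmarshal(out, &listed); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range listed.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, want := range required {
		if !have[want] {
			missing = append(missing, want)
		}
	}
	return missing, nil
}

func main() {
	missing, err := missingImages([]string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	})
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	if len(missing) == 0 {
		fmt.Println("all images are preloaded for cri-o runtime")
	} else {
		fmt.Println("would load missing images:", missing)
	}
}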
	I0729 12:42:40.600100  260472 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0729 12:42:40.600218  260472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-767488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:42:40.600301  260472 ssh_runner.go:195] Run: crio config
	I0729 12:42:40.713091  260472 cni.go:84] Creating CNI manager for ""
	I0729 12:42:40.713114  260472 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 12:42:40.713124  260472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:42:40.713150  260472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767488 NodeName:ha-767488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:42:40.713297  260472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
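The kubeadm config printed above is rendered per node from the cluster profile (node name and IP, API port, pod subnet, Kubernetes version) before being copied to /var/tmp/minikube/kubeadm.yaml.new. A much-reduced sketch of that kind of rendering with text/template; the struct and the trimmed template are illustrative, not minikube's real kubeadm template:

package main

import (
	"os"
	"text/template"
)

// params is an illustrative subset of the values substituted into the config.
type params struct {
	NodeName   string
	NodeIP     string
	APIPort    int
	PodSubnet  string
	K8sVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, params{
		NodeName:   "ha-767488",
		NodeIP:     "192.168.39.217",
		APIPort:    8443,
		PodSubnet:  "10.244.0.0/16",
		K8sVersion: "v1.30.3",
	})
}

Running this with the second and third control-plane nodes' values would produce the per-node variants that the later join steps rely on.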
	I0729 12:42:40.713315  260472 kube-vip.go:115] generating kube-vip config ...
	I0729 12:42:40.713354  260472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 12:42:40.731149  260472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 12:42:40.731283  260472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
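The kube-vip manifest above pins the HA virtual IP 192.168.39.254 onto eth0 and, because the ip_vs modules loaded, also enables control-plane load-balancing on port 8443. One small sanity check worth running before writing such a manifest is that the VIP actually falls inside the node subnet; the /24 below mirrors the DHCP lease shown earlier in the log and is otherwise an assumption:

package main

import (
	"fmt"
	"net"
	"os"
)

// vipInSubnet reports whether the HA virtual IP lies inside the node network.
func vipInSubnet(vip, nodeCIDR string) (bool, error) {
	ip := net.ParseIP(vip)
	if ip == nil {
		return false, fmt.Errorf("invalid VIP %q", vip)
	}
	_, network, err := net.ParseCIDR(nodeCIDR)
	if err != nil {
		return false, err
	}
	return network.Contains(ip), nil
}

func main() {
	ok, err := vipInSubnet("192.168.39.254", "192.168.39.0/24")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("VIP inside node subnet:", ok)
}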
	I0729 12:42:40.731354  260472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:42:40.745678  260472 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:42:40.745771  260472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 12:42:40.756067  260472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 12:42:40.779511  260472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:42:40.802104  260472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:42:40.819400  260472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 12:42:40.835924  260472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 12:42:40.840719  260472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:42:40.986870  260472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:42:41.001565  260472 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488 for IP: 192.168.39.217
	I0729 12:42:41.001593  260472 certs.go:194] generating shared ca certs ...
	I0729 12:42:41.001614  260472 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:42:41.001819  260472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 12:42:41.001875  260472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 12:42:41.001890  260472 certs.go:256] generating profile certs ...
	I0729 12:42:41.001972  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/client.key
	I0729 12:42:41.002032  260472 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key.310e5293
	I0729 12:42:41.002065  260472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key
	I0729 12:42:41.002076  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:42:41.002091  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:42:41.002113  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:42:41.002131  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:42:41.002148  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:42:41.002165  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:42:41.002182  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:42:41.002198  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:42:41.002263  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 12:42:41.002296  260472 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 12:42:41.002305  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:42:41.002328  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 12:42:41.002348  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:42:41.002370  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 12:42:41.002406  260472 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 12:42:41.002434  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.002446  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.002458  260472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.003070  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:42:41.027259  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 12:42:41.050547  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:42:41.074374  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:42:41.097416  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 12:42:41.120537  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:42:41.143944  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:42:41.166548  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/ha-767488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:42:41.189375  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 12:42:41.212392  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 12:42:41.235698  260472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:42:41.258918  260472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:42:41.275147  260472 ssh_runner.go:195] Run: openssl version
	I0729 12:42:41.281163  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:42:41.291624  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296196  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.296247  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:42:41.301759  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:42:41.310741  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 12:42:41.320986  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325289  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.325343  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 12:42:41.331301  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 12:42:41.341279  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 12:42:41.351883  260472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.355957  260472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.356029  260472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 12:42:41.361571  260472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
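	The three hash-then-link sequences above install each CA into the OpenSSL trust directory under its subject-hash name, which is how OpenSSL-based clients look up trusted roots. A minimal sketch of the same pattern run by hand, reusing the minikubeCA.pem path from the log (the HASH variable is illustrative):
	    # print the subject-name hash OpenSSL uses for CA lookup
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # link the CA under <hash>.0 so OpenSSL-based clients will trust it
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"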
	I0729 12:42:41.370434  260472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:42:41.374797  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:42:41.380122  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:42:41.385653  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:42:41.391013  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:42:41.396652  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:42:41.402042  260472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
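	Each -checkend 86400 call above asks OpenSSL whether the certificate will expire within the next 24 hours (86,400 seconds); the command exits 0 if the certificate remains valid past that window and non-zero otherwise. A minimal sketch of the same check (certificate path taken from the log):
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	      echo "apiserver.crt valid for at least another 24h"
	    else
	      echo "apiserver.crt expires within 24h (or is already expired)"
	    fi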
	I0729 12:42:41.407437  260472 kubeadm.go:392] StartCluster: {Name:ha-767488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-767488 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:42:41.407562  260472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:42:41.407600  260472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:42:41.448602  260472 cri.go:89] found id: "c2ecec373e7fc9b2df14a7ee038d73c5c0f8ef3e75270e347eb200f6abfb5000"
	I0729 12:42:41.448629  260472 cri.go:89] found id: "6f541b63f34e8eeb46f9636fcd9f0442b732b33fe15a4bb1e996edfc3adf2fe8"
	I0729 12:42:41.448633  260472 cri.go:89] found id: "18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a"
	I0729 12:42:41.448637  260472 cri.go:89] found id: "66eeaa3de5dde1e9f2a918cb9fd9bc19adf3f3ed253d852c708c0ef05e28f69b"
	I0729 12:42:41.448639  260472 cri.go:89] found id: "149dfcffe55a708779d440706d95050121ad76560bbaa46641838c344b217e7f"
	I0729 12:42:41.448643  260472 cri.go:89] found id: "7ffae0e726786e23c9ba39593a6166909b333b1b7c99e601300f323d7ae2af85"
	I0729 12:42:41.448645  260472 cri.go:89] found id: "d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b"
	I0729 12:42:41.448647  260472 cri.go:89] found id: "88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770"
	I0729 12:42:41.448650  260472 cri.go:89] found id: "45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722"
	I0729 12:42:41.448655  260472 cri.go:89] found id: "76b855b3ad75bb209ec720b68db7bee9bfb69f8d1091919982820aab20439c00"
	I0729 12:42:41.448657  260472 cri.go:89] found id: "a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad"
	I0729 12:42:41.448660  260472 cri.go:89] found id: "5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887"
	I0729 12:42:41.448662  260472 cri.go:89] found id: "5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf"
	I0729 12:42:41.448665  260472 cri.go:89] found id: "c263b16acab21928710d48f8da4b2c33c4cf21fb27bd91c19afc83702ddea09d"
	I0729 12:42:41.448671  260472 cri.go:89] found id: "ed92faf8d1c93020229e184b95ee1a802427dc8ac7d3670cd6503553d63e1ea0"
	I0729 12:42:41.448673  260472 cri.go:89] found id: "e2114078a73c1867e40ef6bec0d5cdb7dae271d0467d72c5e1533e5f89ce4316"
	I0729 12:42:41.448676  260472 cri.go:89] found id: "a99c50ffbfb2857e177c7f353942753fa63496d40986552ea6d4dd738053f5b1"
	I0729 12:42:41.448680  260472 cri.go:89] found id: "f1ea8fbc1b3ff5827e936b241bd850fd33d0b66b1dc0fd42770eabd2b309b5bb"
	I0729 12:42:41.448682  260472 cri.go:89] found id: "dab08a0e0f3c131c614ed8ec288a8bf50cd6b85a4433f41884c95cadbd27473a"
	I0729 12:42:41.448685  260472 cri.go:89] found id: ""
	I0729 12:42:41.448727  260472 ssh_runner.go:195] Run: sudo runc list -f json
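	The container IDs listed above come from filtering CRI-O by the io.kubernetes.pod.namespace label, and the follow-up runc invocation cross-checks what the low-level OCI runtime itself reports. A minimal sketch of the same two queries run interactively, with the exact flags from the log:
	    # all kube-system container IDs known to CRI-O (running or exited)
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # OCI containers as seen by runc, emitted as JSON
	    sudo runc list -f json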
	
	
	==> CRI-O <==
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.524840778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=004d92be-fb22-49c1-b82b-9a67a2a7ea07 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.526302829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84a28904-c33f-43bd-be9b-c30ea9ce5760 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.526971898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257401526714751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84a28904-c33f-43bd-be9b-c30ea9ce5760 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.527447498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8974910e-ce66-42d0-812b-60746a815c74 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.527504441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8974910e-ce66-42d0-812b-60746a815c74 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.528018665Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8974910e-ce66-42d0-812b-60746a815c74 name=/runtime.v1.RuntimeService/ListContainers
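	The CRI-O debug entries above show the gRPC request/response pairs for Version, ImageFsInfo, and an unfiltered ListContainers. A minimal sketch of issuing equivalent queries by hand with crictl, assuming crictl on the node is pointed at the CRI-O socket:
	    # runtime name/version (RuntimeService/Version)
	    sudo crictl version
	    # image filesystem usage (ImageService/ImageFsInfo)
	    sudo crictl imagefsinfo
	    # full container list, no filters (RuntimeService/ListContainers)
	    sudo crictl ps -a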
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.583108254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96565bb5-6443-4fda-a6fe-0f2a3dbf1b66 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.583181598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96565bb5-6443-4fda-a6fe-0f2a3dbf1b66 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.584324075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd3151d5-a38e-4ad1-9e64-8ab918af2e28 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.584973674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257401584947207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd3151d5-a38e-4ad1-9e64-8ab918af2e28 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.585719286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5113003e-97c2-4f42-ac2e-962aa3528b06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.585860296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5113003e-97c2-4f42-ac2e-962aa3528b06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.587317602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5113003e-97c2-4f42-ac2e-962aa3528b06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.599543968Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=115debcf-76bd-4ca1-8fd4-53e385f9b984 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.600124917Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-trgfp,Uid:7969b6b0-a51a-4242-9ecd-1c2f60c5904b,Namespace:default,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722257001958004198,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:24:12.543243347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-4ppv4,Uid:be02c819-68d0-4158-94c8-f6211d18670a,Namespace:default,Attempt:2,},State:SANDB
OX_READY,CreatedAt:1722256998207167070,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:24:12.653234554Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-k6r5l,Uid:f3b8cd5e-1836-4118-847f-888cd3aa6dd7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256964138057613,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21
:32.992506530Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&PodSandboxMetadata{Name:kube-proxy-sqk96,Uid:0730198e-117f-40fe-8b70-8a8364975298,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256964101724392,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21:16.353121323Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&PodSandboxMetadata{Name:kindnet-6x56p,Uid:71b2a0c2-a003-42d6-b606-54d777bc10ee,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256964098383958,Labels:map[string]s
tring{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21:16.356842743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&PodSandboxMetadata{Name:etcd-ha-767488,Uid:98b1224b4c3f88a8bf50c36863b9c250,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256964081643719,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubern
etes.io/config.hash: 98b1224b4c3f88a8bf50c36863b9c250,kubernetes.io/config.seen: 2024-07-29T12:21:06.619260538Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-767488,Uid:10b79fa17a00d66843eca1c032b6c3a0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256964075848650,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10b79fa17a00d66843eca1c032b6c3a0,kubernetes.io/config.seen: 2024-07-29T12:21:06.619266825Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-767488,Uid:db6837
dfa9a0fa8b28ce8897488c95e3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722256964051928852,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{kubernetes.io/config.hash: db6837dfa9a0fa8b28ce8897488c95e3,kubernetes.io/config.seen: 2024-07-29T12:30:07.304947697Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:baafb1c5-8785-44de-ba07-d858ba337fce,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256964050781733,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baa
fb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T12:21:32.996466419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&PodSandboxMetadata{Name:kube-controller-manag
er-ha-767488,Uid:6429dcd204de47eb64e9eb4c7981c7df,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256964044010839,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6429dcd204de47eb64e9eb4c7981c7df,kubernetes.io/config.seen: 2024-07-29T12:21:06.619265581Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qqt5t,Uid:21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256960552438029,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21:32.979775802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-767488,Uid:b1d029e38f53e06a3c7b5c185fd64a06,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722256960515594095,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.217:8443,kubernetes.io/config.hash: b1d029e38f53e06a3c7b5c185fd64a06,kubernetes.io/config.seen: 2024-07-29T12:21:06.619264393Z,kubernetes.io/config.source: fi
le,},RuntimeHandler:,},&PodSandbox{Id:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-4ppv4,Uid:be02c819-68d0-4158-94c8-f6211d18670a,Namespace:default,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256251476299868,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:24:12.653234554Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-trgfp,Uid:7969b6b0-a51a-4242-9ecd-1c2f60c5904b,Namespace:default,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256241459620392,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.po
d.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:24:12.543243347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-767488,Uid:db6837dfa9a0fa8b28ce8897488c95e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722256222029553198,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{kubernetes.io/config.hash: db6837dfa9a0fa8b28ce8897488c95e3,kubernetes.io/config.seen: 2024-07-29T12:30:07.304947697Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f
7a078ff04cdea981,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-k6r5l,Uid:f3b8cd5e-1836-4118-847f-888cd3aa6dd7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256219708342648,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21:32.992506530Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qqt5t,Uid:21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256216744143932,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21:32.979775802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&PodSandboxMetadata{Name:kube-proxy-sqk96,Uid:0730198e-117f-40fe-8b70-8a8364975298,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256216697527423,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21:16.353121323Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&
PodSandboxMetadata{Name:kindnet-6x56p,Uid:71b2a0c2-a003-42d6-b606-54d777bc10ee,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256207675325830,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T12:21:16.356842743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-767488,Uid:10b79fa17a00d66843eca1c032b6c3a0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256207658124439,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10b79fa17a00d66843eca1c032b6c3a0,kubernetes.io/config.seen: 2024-07-29T12:21:06.619266825Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&PodSandboxMetadata{Name:etcd-ha-767488,Uid:98b1224b4c3f88a8bf50c36863b9c250,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722256207651389807,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubernetes.io/config.hash: 98b1224b4c3f88a8bf50c36863b9c250,kubernetes.io/config.seen: 2024-07-29T12:21:06.619260538Z,kubern
etes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=115debcf-76bd-4ca1-8fd4-53e385f9b984 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.601274377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af8da9a7-7d85-4bb2-bf21-ce039bb5172a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.601361497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af8da9a7-7d85-4bb2-bf21-ce039bb5172a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.602574728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af8da9a7-7d85-4bb2-bf21-ce039bb5172a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.644258567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2eedd751-48d2-49f6-ace8-9b440898d146 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.644393127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2eedd751-48d2-49f6-ace8-9b440898d146 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.646194681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0846861-8451-4513-b91e-08e3c9d716c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.646652175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722257401646628003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0846861-8451-4513-b91e-08e3c9d716c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.649109961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea1b1782-c8f4-4101-ba33-5f8aa37ee032 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.649213187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea1b1782-c8f4-4101-ba33-5f8aa37ee032 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:50:01 ha-767488 crio[6770]: time="2024-07-29 12:50:01.650154271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722257205679398693,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 9,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722257157679385960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 4,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b,PodSandboxId:309c197fc5d3026fe5887ac8f85b4a3208f67502563a5aa976e8c6ad402a3454,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257106688290314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6429dcd204de47eb64e9eb4c7981c7df,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 8,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf19aeac6987974197a0c4fc5ef354cf20bc2cfc5683873ace6945ca2a3d83c4,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722257088688538659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fa3f76ed7ccda60c4c6e231a6887aa5229c2b25e6696966a476b9886d038b5,PodSandboxId:69a46f7b3f55baa72fdf5b1ae2cb64f450b9216cbcd0a3edd6803b0c48b41e2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257007681963333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baafb1c5-8785-44de-ba07-d858ba337fce,},Annotations:map[string]string{io.kubernetes.container.hash: 652d618a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b0c852c73cd36ba713615d693f2704d45118139c5677cd88d083912617daef,PodSandboxId:10b4f76c89c4ae4310d83c4946c797e37a19ad6264f2b8f3eae7f8c1f2d7c26e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722257002105739336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5267677fe3d34fad80faf3c7949935560d4b8794349dc0b074dc70c3429867,PodSandboxId:a2229e9d163bb77c610841ca6590b80077298eb172e4716a1dfd1479e0c25aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722256998333917463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf,PodSandboxId:8ed4d1a8b9e4974abdc5251351fb8c09c047e38bb1b36fb5216fc18bdbc8d157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722256969229544253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d029e38f53e06a3c7b5c185fd64a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3e08f0ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b,PodSandboxId:00984bd8001feb5ed9f77571c167e0803f7c15281c84a0804035a5d1d1ba10fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256969184377799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]string{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b384618e5ae14c67777ad9c0294426c93963be6110cbbda099994dc515fd3e10,PodSandboxId:2c2519cb3cb91c93edfcbe22fc0f10c0060f2fedf0e9bbc4dee120d59ac19946,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722256964992696502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0,PodSandboxId:33794e355298339573727419eb819362a6fbba43533c443606689368631c6ed7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722256964944547268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a8364975298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577,PodSandboxId:aab24bd3a9edf83f4494105a97d68b87240d7a362b49eeec83ef9e03042f16c9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722256964544785454,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584,PodSandboxId:b2a043288f89fcd77c823cdc1864f0ccad12b3e58b9073209c077a133e7d3c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722256964678578114,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba,PodSandboxId:6be19c0f23e95c03a878e75b7eb44f4fef3b8e9aad64ae174f0a2abb9a5f4b9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722256964588272188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c
032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828,PodSandboxId:b2a875cf8cfc199f2d4e139e07c304ad62106f9ac4b4993fa936cff56fd7c8f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722256964416223066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubern
etes.container.hash: f2243a41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18d760360355700e8a42faaba690a4a57ef58f437e0e7dbefd872256cc796a7a,PodSandboxId:4ac1d50b066bb2eb7a34bb7d199b90348d91669b762f37cdc03526a6d0df2951,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722256691386408722,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6837dfa9a0fa8b28ce8897488c95e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1e978a01d07fe5560351a9000228fcd6513bf1523fc69c552cfb9348a48738,PodSandboxId:6ff1b7f6ad7317cd752d49d26da2bdd58472e1bcf1b6e6326622f05f66bba0c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256251616439055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4ppv4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be02c819-68d0-4158-94c8-f6211d18670a,},Annotations:map[string]string{io.kubernetes.container.hash: ff023d68,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbea78e99e7243a706c470f06d9c35c916c801b42b3d4292eaebc32b040bd88,PodSandboxId:a7dc5254878c7e9d75a9ddca7dba85c5cf5cdd5d4f268437ed042f869f7056a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722256242073157926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-trgfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7969b6b0-a51a-4242-9ecd-1c2f60c5904b,},Annotations:map[string]string{io.kubernetes.container.hash: e7252d35,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b,PodSandboxId:464e80f1474daa7b0db4ff0092f58e13b090fc858e0ff28f7a078ff04cdea981,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256219945374530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k6r5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b8cd5e-1836-4118-847f-888cd3aa6dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b83be43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770,PodSandboxId:4e921577c4923d7b6e0c696432d1abd949520609041ab98de7c322d2ccda0520,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722256216950640136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqk96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0730198e-117f-40fe-8b70-8a836497
5298,},Annotations:map[string]string{io.kubernetes.container.hash: e92dcc64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722,PodSandboxId:6fd6fea36e81ffa4290fe1a361be26054cf988c1ac45e7156ba5e6fbec0beb5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722256216944029521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qqt5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c0e561-16ad-4e7c-9d2b-9fd551fb2d73,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 6e12a500,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad,PodSandboxId:ebff2bebd5529e4a88cb12b43ccd259c745394eec3146bd27184e0b6b2ce8483,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722256208279122761,Labels:map[string]string{io.kubernetes.con
tainer.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6x56p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b2a0c2-a003-42d6-b606-54d777bc10ee,},Annotations:map[string]string{io.kubernetes.container.hash: fe443934,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887,PodSandboxId:4d030101f0f826244ce88da9b0694fb5a9203b3a51c102e6ee59de3ad816bb65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722256208057598157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-ha-767488,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b79fa17a00d66843eca1c032b6c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf,PodSandboxId:c38a2d43be1535384115e15374717abe3880434bd6dbb72a6dcf2ae1a42119cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722256207875875673,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-767488,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b1224b4c3f88a8bf50c36863b9c250,},Annotations:map[string]string{io.kubernetes.container.hash: f2243a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea1b1782-c8f4-4101-ba33-5f8aa37ee032 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f525ef9d81722       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   3 minutes ago       Running             kube-controller-manager   9                   309c197fc5d30       kube-controller-manager-ha-767488
	e93850281207e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   4 minutes ago       Running             kube-apiserver            4                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	2276a6710daab       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   4 minutes ago       Exited              kube-controller-manager   8                   309c197fc5d30       kube-controller-manager-ha-767488
	cf19aeac69879       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner       5                   69a46f7b3f55b       storage-provisioner
	29fa3f76ed7cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       4                   69a46f7b3f55b       storage-provisioner
	25b0c852c73cd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   2                   10b4f76c89c4a       busybox-fc5497c4f-trgfp
	8e5267677fe3d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   2                   a2229e9d163bb       busybox-fc5497c4f-4ppv4
	76489ee06a477       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   7 minutes ago       Exited              kube-apiserver            3                   8ed4d1a8b9e49       kube-apiserver-ha-767488
	9d1de005960b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   7 minutes ago       Running             coredns                   2                   00984bd8001fe       coredns-7db6d8ff4d-qqt5t
	b384618e5ae14       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   7 minutes ago       Running             kube-vip                  2                   2c2519cb3cb91       kube-vip-ha-767488
	cc3c94fe6246a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   7 minutes ago       Running             kube-proxy                2                   33794e3552983       kube-proxy-sqk96
	2d5168de1ca60       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   7 minutes ago       Running             coredns                   2                   b2a043288f89f       coredns-7db6d8ff4d-k6r5l
	aa1dfc42a005d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   7 minutes ago       Running             kube-scheduler            2                   6be19c0f23e95       kube-scheduler-ha-767488
	b50ae6e8e38f6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   7 minutes ago       Running             kindnet-cni               2                   aab24bd3a9edf       kindnet-6x56p
	a311cff0c8ecc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 minutes ago       Running             etcd                      2                   b2a875cf8cfc1       etcd-ha-767488
	18d7603603557       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   11 minutes ago      Exited              kube-vip                  1                   4ac1d50b066bb       kube-vip-ha-767488
	3f1e978a01d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   19 minutes ago      Exited              busybox                   1                   6ff1b7f6ad731       busybox-fc5497c4f-4ppv4
	cbbea78e99e72       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   19 minutes ago      Exited              busybox                   1                   a7dc5254878c7       busybox-fc5497c4f-trgfp
	d899a73918641       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 minutes ago      Exited              coredns                   1                   464e80f1474da       coredns-7db6d8ff4d-k6r5l
	88ec5aa0ed7ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 minutes ago      Exited              kube-proxy                1                   4e921577c4923       kube-proxy-sqk96
	45379775c471b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 minutes ago      Exited              coredns                   1                   6fd6fea36e81f       coredns-7db6d8ff4d-qqt5t
	a327747c60c54       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   19 minutes ago      Exited              kindnet-cni               1                   ebff2bebd5529       kindnet-6x56p
	5e886bb5a4a2e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   19 minutes ago      Exited              kube-scheduler            1                   4d030101f0f82       kube-scheduler-ha-767488
	5c8cded716df9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   19 minutes ago      Exited              etcd                      1                   c38a2d43be153       etcd-ha-767488
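	
	For reference, a container listing like the table above can normally be regenerated on the node itself with crictl; a minimal sketch, assuming the minikube profile is named ha-767488 (matching the node name shown) and CRI-O is listening on its default socket:
	
	  $ minikube -p ha-767488 ssh -- sudo crictl ps -a
	  $ minikube -p ha-767488 ssh -- sudo crictl ps -a -o json    # roughly the raw ListContainers data dumped above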
	
	
	==> coredns [2d5168de1ca60a1804768ba74dbfa0354d0345a9df24f0f768226a2be1bba584] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[283503875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:42:53.754) (total time: 10001ms):
	Trace[283503875]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:43:03.755)
	Trace[283503875]: [10.001379228s] [10.001379228s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [45379775c471bd4a9ef1739d974345e80f77081cba400e4da29837d8833d8722] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[841416442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.890) (total time: 11819ms):
	Trace[841416442]: ---"Objects listed" error:Unauthorized 11819ms (12:40:27.709)
	Trace[841416442]: [11.819152896s] [11.819152896s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2022085669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.047) (total time: 12661ms):
	Trace[2022085669]: ---"Objects listed" error:Unauthorized 12661ms (12:40:27.709)
	Trace[2022085669]: [12.66151731s] [12.66151731s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1130676405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:32.086) (total time: 10721ms):
	Trace[1130676405]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 10720ms (12:40:42.807)
	Trace[1130676405]: [10.721021558s] [10.721021558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3010": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d1de005960b44c240c3297855fd44976860ae0bb1f241b69ed4ebd6ad75874b] <==
	Trace[394769481]: [10.001110422s] [10.001110422s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53396->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53384->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53380->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d899a73918641072da6b9eb108383a2b7685b434530cc0a8f7e8a2e87f11493b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1030282606]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.356) (total time: 12346ms):
	Trace[1030282606]: ---"Objects listed" error:Unauthorized 12346ms (12:40:27.702)
	Trace[1030282606]: [12.346347085s] [12.346347085s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[418228940]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:15.563) (total time: 12139ms):
	Trace[418228940]: ---"Objects listed" error:Unauthorized 12138ms (12:40:27.702)
	Trace[418228940]: [12.139191986s] [12.139191986s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2011977158]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.350) (total time: 11455ms):
	Trace[2011977158]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11455ms (12:40:42.805)
	Trace[2011977158]: [11.45543795s] [11.45543795s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3039": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3048": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[856661345]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 12:40:31.528) (total time: 11278ms):
	Trace[856661345]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF 11278ms (12:40:42.807)
	Trace[856661345]: [11.278535864s] [11.278535864s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3042": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
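	
	The coredns logs above can usually be retrieved either through the API server or, when the API server is unreachable as it was here, straight from CRI-O; a sketch, assuming the kubeconfig context carries the profile name ha-767488:
	
	  $ kubectl --context ha-767488 -n kube-system logs coredns-7db6d8ff4d-k6r5l --previous
	  $ minikube -p ha-767488 ssh -- sudo crictl logs 2d5168de1ca60    # container ID taken from the status table above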
	
	
	==> describe nodes <==
	Name:               ha-767488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:49:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:48:41 +0000   Mon, 29 Jul 2024 12:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-767488
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4910accb98434efca56ff8b39068800c
	  System UUID:                4910accb-9843-4efc-a56f-f8b39068800c
	  Boot ID:                    f538ab8c-89b7-40ce-b82e-7644a867ee15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4ppv4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  default                     busybox-fc5497c4f-trgfp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-7db6d8ff4d-k6r5l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-7db6d8ff4d-qqt5t             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-767488                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-6x56p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-767488             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-767488    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-sqk96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-767488             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-767488                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 6m36s                kube-proxy       
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 28m                  kube-proxy       
	  Normal   Starting                 29m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     29m (x7 over 29m)    kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29m (x8 over 29m)    kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  29m (x8 over 29m)    kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     28m                  kubelet          Node ha-767488 status is now: NodeHasSufficientPID
	  Normal   Starting                 28m                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    28m                  kubelet          Node ha-767488 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  28m                  kubelet          Node ha-767488 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  28m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           28m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   NodeReady                28m                  kubelet          Node ha-767488 status is now: NodeReady
	  Normal   RegisteredNode           27m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           26m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           23m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Warning  ContainerGCFailed        7m56s (x4 over 20m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           6m25s                node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           3m5s                 node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           2m1s                 node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
	  Normal   RegisteredNode           19s                  node-controller  Node ha-767488 event: Registered Node ha-767488 in Controller
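	
	The conditions and events above come from the standard node describe output; a sketch for re-querying them, under the same context-name assumption:
	
	  $ kubectl --context ha-767488 describe node ha-767488
	  $ kubectl --context ha-767488 get events -A --sort-by=.lastTimestamp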
	
	
	Name:               ha-767488-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:22:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:50:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:49:19 +0000   Mon, 29 Jul 2024 12:22:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-767488-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9a3fe2d6456464f8574d4c1d95e4f21
	  System UUID:                d9a3fe2d-6456-464f-8574-d4c1d95e4f21
	  Boot ID:                    5cef9760-b094-4a5a-943c-bf1eb8a249d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jjx77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-767488-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-l7jpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-767488-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-767488-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-d9lg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-767488-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-767488-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m7s                   kube-proxy       
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           27m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           27m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           26m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  Rebooted                 24m                    kubelet          Node ha-767488-m02 has been rebooted, boot id: 9ab58707-555a-4bb6-83c9-2399f8c434d4
	  Normal   NodeHasSufficientPID     24m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 24m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  24m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  24m                    kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m                    kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Warning  ContainerGCFailed        19m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           17m                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   Starting                 6m57s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m57s (x8 over 6m57s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m57s (x8 over 6m57s)  kubelet          Node ha-767488-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m57s (x7 over 6m57s)  kubelet          Node ha-767488-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m25s                  node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           3m5s                   node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	  Normal   RegisteredNode           19s                    node-controller  Node ha-767488-m02 event: Registered Node ha-767488-m02 in Controller
	
	
	Name:               ha-767488-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:50:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:47:58 +0000   Mon, 29 Jul 2024 12:47:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-767488-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ca168b2de41451a82ff59b787c535ad
	  System UUID:                5ca168b2-de41-451a-82ff-59b787c535ad
	  Boot ID:                    f8572197-e522-4d4c-92d1-3c0e30179060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-767488-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-bz9pp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-767488-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-767488-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-tzj27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-767488-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-767488-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 26m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           26m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           26m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           26m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           23m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   NodeNotReady             22m                    node-controller  Node ha-767488-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           17m                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           6m25s                  node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           3m5s                   node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m34s (x2 over 2m35s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x2 over 2m35s)  kubelet          Node ha-767488-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x2 over 2m35s)  kubelet          Node ha-767488-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m34s                  kubelet          Node ha-767488-m03 has been rebooted, boot id: f8572197-e522-4d4c-92d1-3c0e30179060
	  Normal   NodeReady                2m34s                  kubelet          Node ha-767488-m03 status is now: NodeReady
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	  Normal   RegisteredNode           19s                    node-controller  Node ha-767488-m03 event: Registered Node ha-767488-m03 in Controller
	
	
	Name:               ha-767488-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:50:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:48:50 +0000   Mon, 29 Jul 2024 12:48:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-767488-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 326a5fb51aae42b7b8056fc3c9e53faf
	  System UUID:                326a5fb5-1aae-42b7-b805-6fc3c9e53faf
	  Boot ID:                    5525fcaf-d53c-41e4-a857-9519defa86cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bgb2n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-proxy-2m5gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 25m                  kube-proxy       
	  Normal   Starting                 98s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  25m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     25m (x2 over 25m)    kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    25m (x2 over 25m)    kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  25m (x2 over 25m)    kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           25m                  node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           25m                  node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           25m                  node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeReady                24m                  kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal   RegisteredNode           23m                  node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   NodeNotReady             22m                  node-controller  Node ha-767488-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           17m                  node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           6m25s                node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           3m5s                 node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   RegisteredNode           2m1s                 node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	  Normal   Starting                 103s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  102s (x2 over 102s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x2 over 102s)  kubelet          Node ha-767488-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x2 over 102s)  kubelet          Node ha-767488-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 102s                 kubelet          Node ha-767488-m04 has been rebooted, boot id: 5525fcaf-d53c-41e4-a857-9519defa86cc
	  Normal   NodeReady                102s                 kubelet          Node ha-767488-m04 status is now: NodeReady
	  Normal   RegisteredNode           19s                  node-controller  Node ha-767488-m04 event: Registered Node ha-767488-m04 in Controller
	
	
	Name:               ha-767488-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-767488-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=ha-767488
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_49_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:49:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767488-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:49:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:49:54 +0000   Mon, 29 Jul 2024 12:49:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-767488-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 07369ebad7c74bdca88f86988699fb71
	  System UUID:                07369eba-d7c7-4bdc-a88f-86988699fb71
	  Boot ID:                    e2fc7bfa-d1ad-4431-89cf-7afed400131a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-767488-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         37s
	  kube-system                 kindnet-6kzhd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      39s
	  kube-system                 kube-apiserver-ha-767488-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-ha-767488-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-dhfx4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-ha-767488-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-vip-ha-767488-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 35s                kube-proxy       
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node ha-767488-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node ha-767488-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 39s)  kubelet          Node ha-767488-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           36s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
	  Normal  RegisteredNode           35s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
	  Normal  RegisteredNode           35s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
	  Normal  RegisteredNode           19s                node-controller  Node ha-767488-m05 event: Registered Node ha-767488-m05 in Controller
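For reference, the node descriptions above can be re-collected against the same cluster with standard tooling; a minimal sketch, assuming the ha-767488 profile seen in these logs is also the name of the active kubeconfig context:

    # roles, versions and internal IPs for every node in the HA profile
    kubectl --context ha-767488 get nodes -o wide
    # full per-node detail (labels, conditions, allocated resources, events), as dumped above
    kubectl --context ha-767488 describe nodes
    # minikube's own view of the profile's machines
    minikube -p ha-767488 node list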
	
	
	==> dmesg <==
	[ +10.417395] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.584590] kauditd_printk_skb: 34 callbacks suppressed
	[Jul29 12:22] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 12:25] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 12:30] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.152481] systemd-fstab-generator[3296]: Ignoring "noauto" option for root device
	[  +0.201233] systemd-fstab-generator[3310]: Ignoring "noauto" option for root device
	[  +0.141805] systemd-fstab-generator[3322]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +4.887179] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.088800] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.359831] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.040803] kauditd_printk_skb: 30 callbacks suppressed
	[ +16.902160] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.806405] kauditd_printk_skb: 5 callbacks suppressed
	[Jul29 12:42] systemd-fstab-generator[6690]: Ignoring "noauto" option for root device
	[  +0.156941] systemd-fstab-generator[6701]: Ignoring "noauto" option for root device
	[  +0.179346] systemd-fstab-generator[6715]: Ignoring "noauto" option for root device
	[  +0.150490] systemd-fstab-generator[6727]: Ignoring "noauto" option for root device
	[  +0.290091] systemd-fstab-generator[6755]: Ignoring "noauto" option for root device
	[  +8.442471] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.232725] systemd-fstab-generator[6987]: Ignoring "noauto" option for root device
	[  +4.882020] kauditd_printk_skb: 101 callbacks suppressed
	[Jul29 12:43] kauditd_printk_skb: 11 callbacks suppressed
	[ +20.903422] kauditd_printk_skb: 10 callbacks suppressed
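The dmesg excerpt above and the per-container sections that follow are the kind of output aggregated by minikube's log dump; a minimal sketch of collecting them directly, again assuming the ha-767488 profile:

    # aggregated logs for the profile (kernel, kubelet, container runtime, control-plane containers)
    minikube -p ha-767488 logs
    # raw kernel ring buffer on the primary node
    minikube -p ha-767488 ssh -- dmesg | tail -n 50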
	
	
	==> etcd [5c8cded716df9fd9e6e43d630eaed9773e3d3f4a99b24b1e2662048f450fddbf] <==
	{"level":"info","ts":"2024-07-29T12:40:51.117051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd [logterm: 7, index: 3480] sent MsgPreVote request to d9000071a51f92ea at term 7"}
	{"level":"info","ts":"2024-07-29T12:40:52.278962Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T12:40:52.279017Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	{"level":"warn","ts":"2024-07-29T12:40:52.279122Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.279145Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.290948Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:40:52.291007Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:40:52.291066Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T12:40:52.291289Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291329Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291362Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291459Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291525Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291589Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291602Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"30f76e47e42605a5"}
	{"level":"info","ts":"2024-07-29T12:40:52.291608Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.29172Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291751Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291782Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.291865Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9000071a51f92ea"}
	{"level":"info","ts":"2024-07-29T12:40:52.304523Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.30477Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-07-29T12:40:52.304858Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-767488","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> etcd [a311cff0c8eccdd5a09eaeacd7969122d9ddde39d014e13fa635c80a8c881828] <==
	{"level":"warn","ts":"2024-07-29T12:47:44.965528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.230668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:44.966066Z","caller":"traceutil/trace.go:171","msg":"trace[453902694] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3782; }","duration":"106.799466ms","start":"2024-07-29T12:47:44.859168Z","end":"2024-07-29T12:47:44.965968Z","steps":["trace[453902694] 'range keys from in-memory index tree'  (duration: 103.837804ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:47:50.971028Z","caller":"traceutil/trace.go:171","msg":"trace[1827493490] linearizableReadLoop","detail":"{readStateIndex:4330; appliedIndex:4330; }","duration":"111.574421ms","start":"2024-07-29T12:47:50.859424Z","end":"2024-07-29T12:47:50.970999Z","steps":["trace[1827493490] 'read index received'  (duration: 111.56917ms)","trace[1827493490] 'applied index is now lower than readState.Index'  (duration: 3.955µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:47:50.971255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.808091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-767488-m03\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-29T12:47:50.971313Z","caller":"traceutil/trace.go:171","msg":"trace[1184293185] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-767488-m03; range_end:; response_count:1; response_revision:3807; }","duration":"111.898697ms","start":"2024-07-29T12:47:50.8594Z","end":"2024-07-29T12:47:50.971299Z","steps":["trace[1184293185] 'agreement among raft nodes before linearized reading'  (duration: 111.702182ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:49:24.308023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(3528410088117503397 11573293933243462141 15636498394331976426) learners=(11036766341275668187)"}
	{"level":"info","ts":"2024-07-29T12:49:24.308421Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","added-peer-id":"992a788318b94edb","added-peer-peer-urls":["https://192.168.39.48:2380"]}
	{"level":"info","ts":"2024-07-29T12:49:24.308497Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.308543Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310618Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310715Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310741Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310759Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.310862Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:24.31094Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb","remote-peer-urls":["https://192.168.39.48:2380"]}
	{"level":"info","ts":"2024-07-29T12:49:24.312889Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"a09c9983ac28f1fd","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.668481Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.668552Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.711343Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"992a788318b94edb","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T12:49:25.711402Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.721013Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:25.744542Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"992a788318b94edb","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T12:49:25.744609Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"992a788318b94edb"}
	{"level":"info","ts":"2024-07-29T12:49:26.936934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(3528410088117503397 11036766341275668187 11573293933243462141 15636498394331976426)"}
	{"level":"info","ts":"2024-07-29T12:49:26.937198Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd"}
	
	
	==> kernel <==
	 12:50:02 up 29 min,  0 users,  load average: 0.44, 0.36, 0.32
	Linux ha-767488 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a327747c60c54b449c2a93db3bd91d7cabe5a45c6e98b63844422773b88816ad] <==
	I0729 12:40:29.352753       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:29.352910       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:29.352935       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:40:29.353062       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:29.353084       1 main.go:299] handling current node
	I0729 12:40:39.356408       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:39.356467       1 main.go:299] handling current node
	I0729 12:40:39.356485       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:39.356493       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:39.356693       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:39.356728       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:39.356862       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:39.356895       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:42.805541       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0729 12:40:42.805601       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	I0729 12:40:49.352084       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:40:49.352122       1 main.go:299] handling current node
	I0729 12:40:49.352136       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:40:49.352140       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:40:49.352268       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:40:49.352274       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:40:49.352317       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:40:49.352321       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	W0729 12:40:50.573724       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
	E0729 12:40:50.573775       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3049": dial tcp 10.96.0.1:443: connect: connection refused
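The "connection refused" entries above show kindnet losing its node watch against the in-cluster Service VIP (10.96.0.1:443) while the control plane was restarting; the reflector re-lists once an apiserver endpoint is reachable again. A minimal sketch of the checks that apply when this persists, assuming the ha-767488 context:

    # which apiserver addresses currently back the in-cluster kubernetes service
    kubectl --context ha-767488 -n default get endpoints kubernetes
    # apiserver readiness through the client-facing endpoint
    kubectl --context ha-767488 get --raw /readyz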
	
	
	==> kindnet [b50ae6e8e38f62d2e29778940201dac4ae9487c76d83d6f2fc51b50ba9c57577] <==
	I0729 12:49:35.900328       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:49:35.900452       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:49:35.900480       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:49:35.900569       1 main.go:295] Handling node with IPs: map[192.168.39.48:{}]
	I0729 12:49:35.900593       1 main.go:322] Node ha-767488-m05 has CIDR [10.244.4.0/24] 
	I0729 12:49:45.899508       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:49:45.899688       1 main.go:299] handling current node
	I0729 12:49:45.899740       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:49:45.899763       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:49:45.900068       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:49:45.900124       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:49:45.900216       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:49:45.900241       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:49:45.900356       1 main.go:295] Handling node with IPs: map[192.168.39.48:{}]
	I0729 12:49:45.900424       1 main.go:322] Node ha-767488-m05 has CIDR [10.244.4.0/24] 
	I0729 12:49:55.899518       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0729 12:49:55.899661       1 main.go:299] handling current node
	I0729 12:49:55.899691       1 main.go:295] Handling node with IPs: map[192.168.39.45:{}]
	I0729 12:49:55.899723       1 main.go:322] Node ha-767488-m02 has CIDR [10.244.1.0/24] 
	I0729 12:49:55.899949       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0729 12:49:55.899997       1 main.go:322] Node ha-767488-m03 has CIDR [10.244.2.0/24] 
	I0729 12:49:55.900081       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0729 12:49:55.900102       1 main.go:322] Node ha-767488-m04 has CIDR [10.244.3.0/24] 
	I0729 12:49:55.900168       1 main.go:295] Handling node with IPs: map[192.168.39.48:{}]
	I0729 12:49:55.900187       1 main.go:322] Node ha-767488-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [76489ee06a4777248ed9c7fd562daf9d74eb130de043482ad38e6ad5a1844cdf] <==
	I0729 12:42:49.430000       1 options.go:221] external host was not specified, using 192.168.39.217
	I0729 12:42:49.431238       1 server.go:148] Version: v1.30.3
	I0729 12:42:49.431325       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:42:49.866306       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 12:42:49.878045       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:42:49.881570       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 12:42:49.881667       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 12:42:49.881920       1 instance.go:299] Using reconciler: lease
	W0729 12:43:09.865456       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 12:43:09.865695       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 12:43:09.882923       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e93850281207e87f03fc437f6794b568715decdd6504954ba8367283e92bcf15] <==
	I0729 12:45:59.628077       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 12:45:59.628961       1 aggregator.go:163] waiting for initial CRD sync...
	I0729 12:45:59.629024       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0729 12:45:59.629373       1 available_controller.go:423] Starting AvailableConditionController
	I0729 12:45:59.638565       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0729 12:45:59.676095       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:45:59.693227       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:45:59.693266       1 policy_source.go:224] refreshing policies
	I0729 12:45:59.696534       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:45:59.726284       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:45:59.731995       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:45:59.732057       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 12:45:59.732134       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:45:59.732180       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:45:59.732200       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:45:59.732874       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:45:59.733570       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:45:59.733620       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:45:59.733627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:45:59.733632       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:45:59.738603       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:46:00.638385       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 12:46:01.060195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.45]
	I0729 12:46:01.061970       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 12:46:01.070860       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b] <==
	I0729 12:45:07.191616       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:45:07.718594       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 12:45:07.718688       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:45:07.721330       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 12:45:07.723008       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:45:07.723288       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:45:07.723400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 12:45:17.724994       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.217:8443/healthz\": dial tcp 192.168.39.217:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f525ef9d817220a42df6d260e402031bc0f40e9d8eccfa433f89cb8d789509a0] <==
	I0729 12:46:57.302912       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m03"
	I0729 12:46:57.303103       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:46:57.306059       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 12:46:57.316545       1 shared_informer.go:320] Caches are synced for disruption
	I0729 12:46:57.355364       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 12:46:57.366896       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 12:46:57.410323       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:46:57.414574       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.464419       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:46:57.471777       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:46:57.890551       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958671       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:46:57.958714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:47:28.903601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.335µs"
	I0729 12:47:29.046385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.149µs"
	I0729 12:47:29.064238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.904µs"
	I0729 12:47:29.065729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.519µs"
	I0729 12:48:20.069830       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	E0729 12:49:23.798967       1 certificate_controller.go:146] Sync csr-bsr6r failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-bsr6r": the object has been modified; please apply your changes to the latest version and try again
	E0729 12:49:23.803140       1 certificate_controller.go:146] Sync csr-bsr6r failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-bsr6r": the object has been modified; please apply your changes to the latest version and try again
	I0729 12:49:23.879066       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-767488-m05\" does not exist"
	I0729 12:49:23.880933       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	I0729 12:49:23.909669       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-767488-m05" podCIDRs=["10.244.4.0/24"]
	I0729 12:49:27.365522       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767488-m05"
	I0729 12:49:47.668674       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767488-m04"
	
	
	==> kube-proxy [88ec5aa0ed7ecc3dc5a8c5528be305d387f6052f87d62b602f7249bf07514770] <==
	E0729 12:38:58.686582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.830714       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.830972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:04.831182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:04.831211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:14.047870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:14.048111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:17.119592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:17.119666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.551700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:35.551992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:35.552162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:39:41.695764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:39:41.696014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:06.272423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:06.272868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&resourceVersion=2975": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:21.631308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:21.631558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3000": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:40:24.703209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:40:24.703408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2945": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [cc3c94fe6246ab5e4e5ebc15e5df7177339e6e1803a5de62af9f05fcf468e4c0] <==
	I0729 12:43:26.005579       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:43:26.005921       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:43:26.005961       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:43:26.008082       1 config.go:192] "Starting service config controller"
	I0729 12:43:26.008129       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:43:26.008154       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:43:26.008158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:43:26.009228       1 config.go:319] "Starting node config controller"
	I0729 12:43:26.009261       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0729 12:43:29.022916       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 12:43:29.023565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:29.023892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.023968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:29.024040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.094895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-767488&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 12:43:32.094978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 12:43:32.095150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 12:43:33.908457       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:43:34.210378       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:43:34.908288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e886bb5a4a2efa60f1c13486b5b6179b7fb96f992a1ec20d9263cb7bd3ab887] <==
	W0729 12:40:22.243325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 12:40:22.243419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 12:40:23.541590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 12:40:23.541652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 12:40:24.030114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.030218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:24.827144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:24.827194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:25.963020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:40:25.963127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 12:40:27.525553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 12:40:27.525717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 12:40:31.457216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:40:31.457249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:40:31.946204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 12:40:31.946255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 12:40:31.987696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:40:31.987742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:40:32.539286       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:40:32.539318       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:40:33.993576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:40:33.993629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:40:34.509160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:34.509295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:40:52.283637       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa1dfc42a005d85caf3f6c775efc543ba30e572f1a2aef81da961336b90306ba] <==
	W0729 12:45:39.021894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:39.022019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:40.035456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:40.035546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.217:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:41.325471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:41.325533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:53.830647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:53.830888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.268363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.268503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	W0729 12:45:54.603189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0729 12:45:54.603346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	I0729 12:46:02.189750       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:49:24.020347       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s8hcn\": pod kube-proxy-s8hcn is already assigned to node \"ha-767488-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s8hcn" node="ha-767488-m05"
	E0729 12:49:24.020932       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cfd78765-9085-4263-b6eb-42118268bc39(kube-system/kube-proxy-s8hcn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s8hcn"
	E0729 12:49:24.021068       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s8hcn\": pod kube-proxy-s8hcn is already assigned to node \"ha-767488-m05\"" pod="kube-system/kube-proxy-s8hcn"
	I0729 12:49:24.021283       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s8hcn" node="ha-767488-m05"
	E0729 12:49:24.113163       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nxt4k\": pod kube-proxy-nxt4k is already assigned to node \"ha-767488-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nxt4k" node="ha-767488-m05"
	E0729 12:49:24.113465       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cee16aeb-6bb7-4a63-a560-ac46a6f443bb(kube-system/kube-proxy-nxt4k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nxt4k"
	E0729 12:49:24.113782       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6kzhd\": pod kindnet-6kzhd is already assigned to node \"ha-767488-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-6kzhd" node="ha-767488-m05"
	E0729 12:49:24.115561       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0da6e183-04cd-4f86-b76b-af4382b3e9b8(kube-system/kindnet-6kzhd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6kzhd"
	E0729 12:49:24.115636       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6kzhd\": pod kindnet-6kzhd is already assigned to node \"ha-767488-m05\"" pod="kube-system/kindnet-6kzhd"
	I0729 12:49:24.115694       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6kzhd" node="ha-767488-m05"
	E0729 12:49:24.113990       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nxt4k\": pod kube-proxy-nxt4k is already assigned to node \"ha-767488-m05\"" pod="kube-system/kube-proxy-nxt4k"
	I0729 12:49:24.119237       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nxt4k" node="ha-767488-m05"
	
	
	==> kubelet <==
	Jul 29 12:46:06 ha-767488 kubelet[1381]: E0729 12:46:06.693101    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:46:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:46:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: I0729 12:46:19.667641    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:19 ha-767488 kubelet[1381]: E0729 12:46:19.668493    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: I0729 12:46:32.667323    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:46:32 ha-767488 kubelet[1381]: E0729 12:46:32.669010    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767488_kube-system(6429dcd204de47eb64e9eb4c7981c7df)\"" pod="kube-system/kube-controller-manager-ha-767488" podUID="6429dcd204de47eb64e9eb4c7981c7df"
	Jul 29 12:46:45 ha-767488 kubelet[1381]: I0729 12:46:45.667906    1381 scope.go:117] "RemoveContainer" containerID="2276a6710daab233cb8757ef722b8e0329b2cf2fbd551a34d51bf17ad366e88b"
	Jul 29 12:47:06 ha-767488 kubelet[1381]: E0729 12:47:06.688205    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:47:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:47:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:48:06 ha-767488 kubelet[1381]: E0729 12:48:06.688745    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:48:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:48:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:49:06 ha-767488 kubelet[1381]: E0729 12:49:06.683298    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:49:06 ha-767488 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:49:06 ha-767488 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:49:06 ha-767488 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:49:06 ha-767488 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:50:01.164460  263667 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-767488 -n ha-767488
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.37s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (327.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-786745
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-786745
E0729 12:59:27.880903  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-786745: exit status 82 (2m1.847205989s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-786745-m03"  ...
	* Stopping node "multinode-786745-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-786745" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786745 --wait=true -v=8 --alsologtostderr
E0729 13:02:18.313194  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786745 --wait=true -v=8 --alsologtostderr: (3m22.960063554s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-786745
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-786745 -n multinode-786745
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-786745 logs -n 25: (1.486897082s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1996079696/001/cp-test_multinode-786745-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745:/home/docker/cp-test_multinode-786745-m02_multinode-786745.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745 sudo cat                                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m02_multinode-786745.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03:/home/docker/cp-test_multinode-786745-m02_multinode-786745-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745-m03 sudo cat                                   | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m02_multinode-786745-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp testdata/cp-test.txt                                                | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1996079696/001/cp-test_multinode-786745-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745:/home/docker/cp-test_multinode-786745-m03_multinode-786745.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745 sudo cat                                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m03_multinode-786745.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02:/home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745-m02 sudo cat                                   | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-786745 node stop m03                                                          | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	| node    | multinode-786745 node start                                                             | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:58 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-786745                                                                | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:58 UTC |                     |
	| stop    | -p multinode-786745                                                                     | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:58 UTC |                     |
	| start   | -p multinode-786745                                                                     | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 13:00 UTC | 29 Jul 24 13:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-786745                                                                | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 13:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:00:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:00:19.138434  270927 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:00:19.138682  270927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:00:19.138691  270927 out.go:304] Setting ErrFile to fd 2...
	I0729 13:00:19.138695  270927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:00:19.138911  270927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:00:19.139425  270927 out.go:298] Setting JSON to false
	I0729 13:00:19.140302  270927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9762,"bootTime":1722248257,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:00:19.140360  270927 start.go:139] virtualization: kvm guest
	I0729 13:00:19.142601  270927 out.go:177] * [multinode-786745] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:00:19.143900  270927 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:00:19.143904  270927 notify.go:220] Checking for updates...
	I0729 13:00:19.145308  270927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:00:19.146714  270927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:00:19.147841  270927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:00:19.149066  270927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:00:19.150251  270927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:00:19.151930  270927 config.go:182] Loaded profile config "multinode-786745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:00:19.152121  270927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:00:19.152558  270927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:00:19.152594  270927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:00:19.168188  270927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I0729 13:00:19.168571  270927 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:00:19.169177  270927 main.go:141] libmachine: Using API Version  1
	I0729 13:00:19.169199  270927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:00:19.169554  270927 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:00:19.169752  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:00:19.205592  270927 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:00:19.207073  270927 start.go:297] selected driver: kvm2
	I0729 13:00:19.207086  270927 start.go:901] validating driver "kvm2" against &{Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:00:19.207278  270927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:00:19.207681  270927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:00:19.207764  270927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:00:19.222517  270927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:00:19.223335  270927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:00:19.223375  270927 cni.go:84] Creating CNI manager for ""
	I0729 13:00:19.223384  270927 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 13:00:19.223496  270927 start.go:340] cluster config:
	{Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:00:19.223731  270927 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:00:19.225686  270927 out.go:177] * Starting "multinode-786745" primary control-plane node in "multinode-786745" cluster
	I0729 13:00:19.226889  270927 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:00:19.226926  270927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:00:19.226939  270927 cache.go:56] Caching tarball of preloaded images
	I0729 13:00:19.227019  270927 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:00:19.227029  270927 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:00:19.227147  270927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/config.json ...
	I0729 13:00:19.227335  270927 start.go:360] acquireMachinesLock for multinode-786745: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:00:19.227376  270927 start.go:364] duration metric: took 24.47µs to acquireMachinesLock for "multinode-786745"
	I0729 13:00:19.227390  270927 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:00:19.227397  270927 fix.go:54] fixHost starting: 
	I0729 13:00:19.227649  270927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:00:19.227685  270927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:00:19.241823  270927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0729 13:00:19.242269  270927 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:00:19.242944  270927 main.go:141] libmachine: Using API Version  1
	I0729 13:00:19.242981  270927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:00:19.243300  270927 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:00:19.243477  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:00:19.243710  270927 main.go:141] libmachine: (multinode-786745) Calling .GetState
	I0729 13:00:19.245192  270927 fix.go:112] recreateIfNeeded on multinode-786745: state=Running err=<nil>
	W0729 13:00:19.245214  270927 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:00:19.247368  270927 out.go:177] * Updating the running kvm2 "multinode-786745" VM ...
	I0729 13:00:19.248976  270927 machine.go:94] provisionDockerMachine start ...
	I0729 13:00:19.248994  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:00:19.249199  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.251473  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.251923  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.251952  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.252055  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.252235  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.252416  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.252572  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.252753  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.253049  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.253066  270927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:00:19.358302  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-786745
	
	I0729 13:00:19.358342  270927 main.go:141] libmachine: (multinode-786745) Calling .GetMachineName
	I0729 13:00:19.358602  270927 buildroot.go:166] provisioning hostname "multinode-786745"
	I0729 13:00:19.358636  270927 main.go:141] libmachine: (multinode-786745) Calling .GetMachineName
	I0729 13:00:19.358882  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.361972  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.362417  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.362449  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.362602  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.362792  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.362981  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.363146  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.363345  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.363516  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.363530  270927 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-786745 && echo "multinode-786745" | sudo tee /etc/hostname
	I0729 13:00:19.489905  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-786745
	
	I0729 13:00:19.489931  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.492518  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.492939  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.492970  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.493175  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.493357  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.493547  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.493676  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.493865  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.494121  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.494149  270927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-786745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-786745/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-786745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:00:19.597687  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:00:19.597729  270927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:00:19.597773  270927 buildroot.go:174] setting up certificates
	I0729 13:00:19.597783  270927 provision.go:84] configureAuth start
	I0729 13:00:19.597795  270927 main.go:141] libmachine: (multinode-786745) Calling .GetMachineName
	I0729 13:00:19.598081  270927 main.go:141] libmachine: (multinode-786745) Calling .GetIP
	I0729 13:00:19.600503  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.600847  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.600898  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.601037  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.603514  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.603963  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.604000  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.604165  270927 provision.go:143] copyHostCerts
	I0729 13:00:19.604197  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:00:19.604227  270927 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:00:19.604237  270927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:00:19.604320  270927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:00:19.604408  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:00:19.604426  270927 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:00:19.604433  270927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:00:19.604457  270927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:00:19.604537  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:00:19.604553  270927 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:00:19.604559  270927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:00:19.604587  270927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:00:19.604650  270927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.multinode-786745 san=[127.0.0.1 192.168.39.10 localhost minikube multinode-786745]
	I0729 13:00:19.778306  270927 provision.go:177] copyRemoteCerts
	I0729 13:00:19.778379  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:00:19.778404  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.780941  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.781307  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.781341  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.781516  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.781747  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.781917  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.782093  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:00:19.866254  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:00:19.866338  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:00:19.896402  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:00:19.896469  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 13:00:19.922708  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:00:19.922788  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:00:19.948346  270927 provision.go:87] duration metric: took 350.547839ms to configureAuth
	I0729 13:00:19.948372  270927 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:00:19.948578  270927 config.go:182] Loaded profile config "multinode-786745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:00:19.948650  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.951153  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.951524  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.951547  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.951728  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.951941  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.952085  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.952212  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.952358  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.952515  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.952531  270927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:01:50.771390  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:01:50.771431  270927 machine.go:97] duration metric: took 1m31.522440977s to provisionDockerMachine
	I0729 13:01:50.771444  270927 start.go:293] postStartSetup for "multinode-786745" (driver="kvm2")
	I0729 13:01:50.771455  270927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:01:50.771479  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:50.771856  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:01:50.771888  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:50.774794  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.775255  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:50.775284  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.775437  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:50.775654  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:50.775845  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:50.775976  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:01:50.860510  270927 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:01:50.864358  270927 command_runner.go:130] > NAME=Buildroot
	I0729 13:01:50.864381  270927 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 13:01:50.864388  270927 command_runner.go:130] > ID=buildroot
	I0729 13:01:50.864399  270927 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 13:01:50.864407  270927 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 13:01:50.864459  270927 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:01:50.864475  270927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:01:50.864553  270927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:01:50.864690  270927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:01:50.864706  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 13:01:50.864808  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:01:50.874090  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:01:50.896956  270927 start.go:296] duration metric: took 125.497949ms for postStartSetup
	I0729 13:01:50.896994  270927 fix.go:56] duration metric: took 1m31.669596653s for fixHost
	I0729 13:01:50.897018  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:50.899392  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.899756  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:50.899812  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.899932  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:50.900138  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:50.900298  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:50.900401  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:50.900581  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:01:50.900745  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:01:50.900759  270927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:01:51.001463  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722258110.986148030
	
	I0729 13:01:51.001488  270927 fix.go:216] guest clock: 1722258110.986148030
	I0729 13:01:51.001495  270927 fix.go:229] Guest: 2024-07-29 13:01:50.98614803 +0000 UTC Remote: 2024-07-29 13:01:50.896998468 +0000 UTC m=+91.793639616 (delta=89.149562ms)
	I0729 13:01:51.001541  270927 fix.go:200] guest clock delta is within tolerance: 89.149562ms
	I0729 13:01:51.001548  270927 start.go:83] releasing machines lock for "multinode-786745", held for 1m31.774163124s
	I0729 13:01:51.001574  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.001888  270927 main.go:141] libmachine: (multinode-786745) Calling .GetIP
	I0729 13:01:51.004497  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.004945  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:51.004974  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.005139  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.005665  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.005880  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.005957  270927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:01:51.006000  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:51.006116  270927 ssh_runner.go:195] Run: cat /version.json
	I0729 13:01:51.006135  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:51.008460  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.008630  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.008854  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:51.008881  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.009025  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:51.009140  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:51.009169  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.009188  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:51.009337  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:51.009338  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:51.009488  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:51.009495  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:01:51.009666  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:51.009826  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:01:51.107379  270927 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 13:01:51.107472  270927 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 13:01:51.107561  270927 ssh_runner.go:195] Run: systemctl --version
	I0729 13:01:51.113224  270927 command_runner.go:130] > systemd 252 (252)
	I0729 13:01:51.113251  270927 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 13:01:51.113506  270927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:01:51.279065  270927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 13:01:51.286942  270927 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 13:01:51.287162  270927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:01:51.287232  270927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:01:51.298502  270927 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 13:01:51.298528  270927 start.go:495] detecting cgroup driver to use...
	I0729 13:01:51.298595  270927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:01:51.318825  270927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:01:51.334647  270927 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:01:51.334711  270927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:01:51.349061  270927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:01:51.362617  270927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:01:51.510176  270927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:01:51.650361  270927 docker.go:233] disabling docker service ...
	I0729 13:01:51.650429  270927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:01:51.668133  270927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:01:51.682041  270927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:01:51.818090  270927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:01:51.954254  270927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
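	The systemctl calls logged above are the whole container-runtime hand-off: cri-dockerd and Docker are stopped and masked so that CRI-O alone owns the CRI socket. Condensed into a shell sketch for readability (unit names exactly as they appear in the log; this only restates the logged commands, it is not an extra step the test performs):
	# stop and mask the Docker-based runtimes so CRI-O owns the CRI socket
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service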
	I0729 13:01:51.968596  270927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:01:51.986321  270927 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 13:01:51.986638  270927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:01:51.986689  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:51.998083  270927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:01:51.998146  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.008456  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.019330  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.029762  270927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:01:52.040847  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.051356  270927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.061891  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.072165  270927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:01:52.081208  270927 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 13:01:52.081271  270927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:01:52.090464  270927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:01:52.225725  270927 ssh_runner.go:195] Run: sudo systemctl restart crio
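	For readability, the CRI-O reconfiguration performed by the sed and sysctl commands above can be condensed into the following sketch. The drop-in path /etc/crio/crio.conf.d/02-crio.conf, the pause image tag and the cgroupfs cgroup manager value are taken directly from the logged commands; nothing here goes beyond what the log shows:
	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# enable IPv4 forwarding, then reload units and restart the runtime
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio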
	I0729 13:01:52.463138  270927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:01:52.463213  270927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:01:52.467970  270927 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 13:01:52.467990  270927 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 13:01:52.467996  270927 command_runner.go:130] > Device: 0,22	Inode: 1347        Links: 1
	I0729 13:01:52.468009  270927 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 13:01:52.468014  270927 command_runner.go:130] > Access: 2024-07-29 13:01:52.344171509 +0000
	I0729 13:01:52.468033  270927 command_runner.go:130] > Modify: 2024-07-29 13:01:52.344171509 +0000
	I0729 13:01:52.468040  270927 command_runner.go:130] > Change: 2024-07-29 13:01:52.344171509 +0000
	I0729 13:01:52.468044  270927 command_runner.go:130] >  Birth: -
	I0729 13:01:52.468055  270927 start.go:563] Will wait 60s for crictl version
	I0729 13:01:52.468093  270927 ssh_runner.go:195] Run: which crictl
	I0729 13:01:52.471887  270927 command_runner.go:130] > /usr/bin/crictl
	I0729 13:01:52.471949  270927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:01:52.507672  270927 command_runner.go:130] > Version:  0.1.0
	I0729 13:01:52.507694  270927 command_runner.go:130] > RuntimeName:  cri-o
	I0729 13:01:52.507702  270927 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 13:01:52.507710  270927 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 13:01:52.507840  270927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:01:52.507913  270927 ssh_runner.go:195] Run: crio --version
	I0729 13:01:52.535049  270927 command_runner.go:130] > crio version 1.29.1
	I0729 13:01:52.535072  270927 command_runner.go:130] > Version:        1.29.1
	I0729 13:01:52.535080  270927 command_runner.go:130] > GitCommit:      unknown
	I0729 13:01:52.535086  270927 command_runner.go:130] > GitCommitDate:  unknown
	I0729 13:01:52.535091  270927 command_runner.go:130] > GitTreeState:   clean
	I0729 13:01:52.535099  270927 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 13:01:52.535105  270927 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 13:01:52.535111  270927 command_runner.go:130] > Compiler:       gc
	I0729 13:01:52.535120  270927 command_runner.go:130] > Platform:       linux/amd64
	I0729 13:01:52.535130  270927 command_runner.go:130] > Linkmode:       dynamic
	I0729 13:01:52.535137  270927 command_runner.go:130] > BuildTags:      
	I0729 13:01:52.535145  270927 command_runner.go:130] >   containers_image_ostree_stub
	I0729 13:01:52.535152  270927 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 13:01:52.535190  270927 command_runner.go:130] >   btrfs_noversion
	I0729 13:01:52.535206  270927 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 13:01:52.535214  270927 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 13:01:52.535220  270927 command_runner.go:130] >   seccomp
	I0729 13:01:52.535228  270927 command_runner.go:130] > LDFlags:          unknown
	I0729 13:01:52.535236  270927 command_runner.go:130] > SeccompEnabled:   true
	I0729 13:01:52.535246  270927 command_runner.go:130] > AppArmorEnabled:  false
	I0729 13:01:52.536356  270927 ssh_runner.go:195] Run: crio --version
	I0729 13:01:52.562387  270927 command_runner.go:130] > crio version 1.29.1
	I0729 13:01:52.562416  270927 command_runner.go:130] > Version:        1.29.1
	I0729 13:01:52.562425  270927 command_runner.go:130] > GitCommit:      unknown
	I0729 13:01:52.562431  270927 command_runner.go:130] > GitCommitDate:  unknown
	I0729 13:01:52.562438  270927 command_runner.go:130] > GitTreeState:   clean
	I0729 13:01:52.562447  270927 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 13:01:52.562454  270927 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 13:01:52.562461  270927 command_runner.go:130] > Compiler:       gc
	I0729 13:01:52.562469  270927 command_runner.go:130] > Platform:       linux/amd64
	I0729 13:01:52.562479  270927 command_runner.go:130] > Linkmode:       dynamic
	I0729 13:01:52.562485  270927 command_runner.go:130] > BuildTags:      
	I0729 13:01:52.562491  270927 command_runner.go:130] >   containers_image_ostree_stub
	I0729 13:01:52.562498  270927 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 13:01:52.562505  270927 command_runner.go:130] >   btrfs_noversion
	I0729 13:01:52.562514  270927 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 13:01:52.562521  270927 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 13:01:52.562529  270927 command_runner.go:130] >   seccomp
	I0729 13:01:52.562537  270927 command_runner.go:130] > LDFlags:          unknown
	I0729 13:01:52.562546  270927 command_runner.go:130] > SeccompEnabled:   true
	I0729 13:01:52.562553  270927 command_runner.go:130] > AppArmorEnabled:  false
	I0729 13:01:52.568668  270927 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:01:52.573055  270927 main.go:141] libmachine: (multinode-786745) Calling .GetIP
	I0729 13:01:52.575318  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:52.575621  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:52.575665  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:52.575850  270927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:01:52.580174  270927 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 13:01:52.580274  270927 kubeadm.go:883] updating cluster {Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:01:52.580423  270927 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:01:52.580473  270927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:01:52.629879  270927 command_runner.go:130] > {
	I0729 13:01:52.629902  270927 command_runner.go:130] >   "images": [
	I0729 13:01:52.629908  270927 command_runner.go:130] >     {
	I0729 13:01:52.629923  270927 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 13:01:52.629929  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.629986  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 13:01:52.630003  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630011  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630032  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 13:01:52.630048  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 13:01:52.630056  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630065  270927 command_runner.go:130] >       "size": "87165492",
	I0729 13:01:52.630073  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630080  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630094  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630104  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630113  270927 command_runner.go:130] >     },
	I0729 13:01:52.630119  270927 command_runner.go:130] >     {
	I0729 13:01:52.630132  270927 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 13:01:52.630142  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630154  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 13:01:52.630166  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630175  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630187  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 13:01:52.630202  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 13:01:52.630210  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630218  270927 command_runner.go:130] >       "size": "87174707",
	I0729 13:01:52.630226  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630237  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630247  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630254  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630262  270927 command_runner.go:130] >     },
	I0729 13:01:52.630269  270927 command_runner.go:130] >     {
	I0729 13:01:52.630280  270927 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 13:01:52.630289  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630299  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 13:01:52.630308  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630315  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630331  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 13:01:52.630347  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 13:01:52.630354  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630361  270927 command_runner.go:130] >       "size": "1363676",
	I0729 13:01:52.630369  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630378  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630387  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630397  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630404  270927 command_runner.go:130] >     },
	I0729 13:01:52.630411  270927 command_runner.go:130] >     {
	I0729 13:01:52.630424  270927 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 13:01:52.630432  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630442  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 13:01:52.630450  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630457  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630472  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 13:01:52.630492  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 13:01:52.630500  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630508  270927 command_runner.go:130] >       "size": "31470524",
	I0729 13:01:52.630517  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630527  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630534  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630541  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630550  270927 command_runner.go:130] >     },
	I0729 13:01:52.630558  270927 command_runner.go:130] >     {
	I0729 13:01:52.630571  270927 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 13:01:52.630580  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630589  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 13:01:52.630598  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630606  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630621  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 13:01:52.630636  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 13:01:52.630643  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630651  270927 command_runner.go:130] >       "size": "61245718",
	I0729 13:01:52.630660  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630669  270927 command_runner.go:130] >       "username": "nonroot",
	I0729 13:01:52.630679  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630688  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630696  270927 command_runner.go:130] >     },
	I0729 13:01:52.630703  270927 command_runner.go:130] >     {
	I0729 13:01:52.630715  270927 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 13:01:52.630723  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630732  270927 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 13:01:52.630739  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630746  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630761  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 13:01:52.630775  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 13:01:52.630784  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630794  270927 command_runner.go:130] >       "size": "150779692",
	I0729 13:01:52.630804  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.630811  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.630817  270927 command_runner.go:130] >       },
	I0729 13:01:52.630824  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630831  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630841  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630848  270927 command_runner.go:130] >     },
	I0729 13:01:52.630856  270927 command_runner.go:130] >     {
	I0729 13:01:52.630867  270927 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 13:01:52.630880  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630891  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 13:01:52.630899  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630906  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630921  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 13:01:52.630936  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 13:01:52.630944  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630953  270927 command_runner.go:130] >       "size": "117609954",
	I0729 13:01:52.630961  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.630968  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.630976  270927 command_runner.go:130] >       },
	I0729 13:01:52.630984  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630992  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631006  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631013  270927 command_runner.go:130] >     },
	I0729 13:01:52.631029  270927 command_runner.go:130] >     {
	I0729 13:01:52.631042  270927 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 13:01:52.631051  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631062  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 13:01:52.631071  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631080  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631101  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 13:01:52.631115  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 13:01:52.631120  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631128  270927 command_runner.go:130] >       "size": "112198984",
	I0729 13:01:52.631138  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.631148  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.631155  270927 command_runner.go:130] >       },
	I0729 13:01:52.631163  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631169  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631177  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631182  270927 command_runner.go:130] >     },
	I0729 13:01:52.631187  270927 command_runner.go:130] >     {
	I0729 13:01:52.631195  270927 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 13:01:52.631201  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631208  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 13:01:52.631212  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631217  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631229  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 13:01:52.631240  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 13:01:52.631246  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631253  270927 command_runner.go:130] >       "size": "85953945",
	I0729 13:01:52.631259  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.631266  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631272  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631279  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631285  270927 command_runner.go:130] >     },
	I0729 13:01:52.631290  270927 command_runner.go:130] >     {
	I0729 13:01:52.631300  270927 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 13:01:52.631307  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631314  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 13:01:52.631322  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631330  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631343  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 13:01:52.631358  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 13:01:52.631366  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631375  270927 command_runner.go:130] >       "size": "63051080",
	I0729 13:01:52.631385  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.631394  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.631401  270927 command_runner.go:130] >       },
	I0729 13:01:52.631413  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631421  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631428  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631434  270927 command_runner.go:130] >     },
	I0729 13:01:52.631442  270927 command_runner.go:130] >     {
	I0729 13:01:52.631452  270927 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 13:01:52.631461  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631471  270927 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 13:01:52.631479  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631486  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631501  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 13:01:52.631515  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 13:01:52.631524  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631531  270927 command_runner.go:130] >       "size": "750414",
	I0729 13:01:52.631540  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.631548  270927 command_runner.go:130] >         "value": "65535"
	I0729 13:01:52.631556  270927 command_runner.go:130] >       },
	I0729 13:01:52.631564  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631573  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631582  270927 command_runner.go:130] >       "pinned": true
	I0729 13:01:52.631591  270927 command_runner.go:130] >     }
	I0729 13:01:52.631597  270927 command_runner.go:130] >   ]
	I0729 13:01:52.631603  270927 command_runner.go:130] > }
	I0729 13:01:52.631848  270927 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:01:52.631865  270927 crio.go:433] Images already preloaded, skipping extraction
	I0729 13:01:52.631929  270927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:01:52.669878  270927 command_runner.go:130] > {
	I0729 13:01:52.669905  270927 command_runner.go:130] >   "images": [
	I0729 13:01:52.669911  270927 command_runner.go:130] >     {
	I0729 13:01:52.669920  270927 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 13:01:52.669926  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.669932  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 13:01:52.669936  270927 command_runner.go:130] >       ],
	I0729 13:01:52.669940  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.669948  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 13:01:52.669956  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 13:01:52.669960  270927 command_runner.go:130] >       ],
	I0729 13:01:52.669966  270927 command_runner.go:130] >       "size": "87165492",
	I0729 13:01:52.669974  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.669981  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.669991  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670005  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670013  270927 command_runner.go:130] >     },
	I0729 13:01:52.670016  270927 command_runner.go:130] >     {
	I0729 13:01:52.670022  270927 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 13:01:52.670026  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670031  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 13:01:52.670035  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670040  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670048  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 13:01:52.670062  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 13:01:52.670072  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670084  270927 command_runner.go:130] >       "size": "87174707",
	I0729 13:01:52.670093  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670104  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670113  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670120  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670124  270927 command_runner.go:130] >     },
	I0729 13:01:52.670130  270927 command_runner.go:130] >     {
	I0729 13:01:52.670137  270927 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 13:01:52.670147  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670158  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 13:01:52.670167  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670176  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670190  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 13:01:52.670204  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 13:01:52.670211  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670216  270927 command_runner.go:130] >       "size": "1363676",
	I0729 13:01:52.670225  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670235  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670246  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670255  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670263  270927 command_runner.go:130] >     },
	I0729 13:01:52.670271  270927 command_runner.go:130] >     {
	I0729 13:01:52.670284  270927 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 13:01:52.670293  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670300  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 13:01:52.670306  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670313  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670328  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 13:01:52.670348  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 13:01:52.670356  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670363  270927 command_runner.go:130] >       "size": "31470524",
	I0729 13:01:52.670371  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670380  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670384  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670392  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670397  270927 command_runner.go:130] >     },
	I0729 13:01:52.670407  270927 command_runner.go:130] >     {
	I0729 13:01:52.670420  270927 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 13:01:52.670429  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670440  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 13:01:52.670449  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670457  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670472  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 13:01:52.670485  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 13:01:52.670493  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670499  270927 command_runner.go:130] >       "size": "61245718",
	I0729 13:01:52.670507  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670512  270927 command_runner.go:130] >       "username": "nonroot",
	I0729 13:01:52.670521  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670527  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670534  270927 command_runner.go:130] >     },
	I0729 13:01:52.670539  270927 command_runner.go:130] >     {
	I0729 13:01:52.670551  270927 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 13:01:52.670559  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670566  270927 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 13:01:52.670574  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670580  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670592  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 13:01:52.670605  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 13:01:52.670611  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670620  270927 command_runner.go:130] >       "size": "150779692",
	I0729 13:01:52.670626  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.670635  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.670644  270927 command_runner.go:130] >       },
	I0729 13:01:52.670651  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670660  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670666  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670674  270927 command_runner.go:130] >     },
	I0729 13:01:52.670680  270927 command_runner.go:130] >     {
	I0729 13:01:52.670693  270927 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 13:01:52.670702  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670711  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 13:01:52.670721  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670731  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670745  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 13:01:52.670757  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 13:01:52.670765  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670770  270927 command_runner.go:130] >       "size": "117609954",
	I0729 13:01:52.670773  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.670778  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.670781  270927 command_runner.go:130] >       },
	I0729 13:01:52.670787  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670793  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670797  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670801  270927 command_runner.go:130] >     },
	I0729 13:01:52.670804  270927 command_runner.go:130] >     {
	I0729 13:01:52.670810  270927 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 13:01:52.670816  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670823  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 13:01:52.670828  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670832  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670848  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 13:01:52.670858  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 13:01:52.670861  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670868  270927 command_runner.go:130] >       "size": "112198984",
	I0729 13:01:52.670872  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.670878  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.670882  270927 command_runner.go:130] >       },
	I0729 13:01:52.670887  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670892  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670896  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670902  270927 command_runner.go:130] >     },
	I0729 13:01:52.670906  270927 command_runner.go:130] >     {
	I0729 13:01:52.670912  270927 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 13:01:52.670918  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670923  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 13:01:52.670929  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670933  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670943  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 13:01:52.670951  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 13:01:52.670957  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670960  270927 command_runner.go:130] >       "size": "85953945",
	I0729 13:01:52.670964  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670968  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670972  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670976  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670981  270927 command_runner.go:130] >     },
	I0729 13:01:52.670984  270927 command_runner.go:130] >     {
	I0729 13:01:52.670993  270927 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 13:01:52.671000  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.671007  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 13:01:52.671011  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671015  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.671023  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 13:01:52.671032  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 13:01:52.671036  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671041  270927 command_runner.go:130] >       "size": "63051080",
	I0729 13:01:52.671047  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.671051  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.671054  270927 command_runner.go:130] >       },
	I0729 13:01:52.671058  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.671064  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.671068  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.671074  270927 command_runner.go:130] >     },
	I0729 13:01:52.671077  270927 command_runner.go:130] >     {
	I0729 13:01:52.671084  270927 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 13:01:52.671089  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.671094  270927 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 13:01:52.671098  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671102  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.671111  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 13:01:52.671118  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 13:01:52.671124  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671127  270927 command_runner.go:130] >       "size": "750414",
	I0729 13:01:52.671132  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.671136  270927 command_runner.go:130] >         "value": "65535"
	I0729 13:01:52.671140  270927 command_runner.go:130] >       },
	I0729 13:01:52.671147  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.671150  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.671154  270927 command_runner.go:130] >       "pinned": true
	I0729 13:01:52.671158  270927 command_runner.go:130] >     }
	I0729 13:01:52.671161  270927 command_runner.go:130] >   ]
	I0729 13:01:52.671164  270927 command_runner.go:130] > }
	I0729 13:01:52.671282  270927 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:01:52.671294  270927 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:01:52.671303  270927 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.30.3 crio true true} ...
	I0729 13:01:52.671420  270927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-786745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:01:52.671522  270927 ssh_runner.go:195] Run: crio config
	I0729 13:01:52.712408  270927 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 13:01:52.712444  270927 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 13:01:52.712455  270927 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 13:01:52.712461  270927 command_runner.go:130] > #
	I0729 13:01:52.712472  270927 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 13:01:52.712481  270927 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 13:01:52.712490  270927 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 13:01:52.712503  270927 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 13:01:52.712512  270927 command_runner.go:130] > # reload'.
	I0729 13:01:52.712521  270927 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 13:01:52.712532  270927 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 13:01:52.712544  270927 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 13:01:52.712555  270927 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 13:01:52.712562  270927 command_runner.go:130] > [crio]
	I0729 13:01:52.712571  270927 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 13:01:52.712582  270927 command_runner.go:130] > # containers images, in this directory.
	I0729 13:01:52.712592  270927 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 13:01:52.712605  270927 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 13:01:52.712689  270927 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 13:01:52.712730  270927 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 13:01:52.712994  270927 command_runner.go:130] > # imagestore = ""
	I0729 13:01:52.713024  270927 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 13:01:52.713034  270927 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 13:01:52.713146  270927 command_runner.go:130] > storage_driver = "overlay"
	I0729 13:01:52.713162  270927 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 13:01:52.713171  270927 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 13:01:52.713181  270927 command_runner.go:130] > storage_option = [
	I0729 13:01:52.713332  270927 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 13:01:52.713341  270927 command_runner.go:130] > ]
	I0729 13:01:52.713351  270927 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 13:01:52.713360  270927 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 13:01:52.713661  270927 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 13:01:52.713680  270927 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 13:01:52.713689  270927 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 13:01:52.713696  270927 command_runner.go:130] > # always happen on a node reboot
	I0729 13:01:52.713881  270927 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 13:01:52.713902  270927 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 13:01:52.713922  270927 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 13:01:52.713934  270927 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 13:01:52.713984  270927 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 13:01:52.714011  270927 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 13:01:52.714024  270927 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 13:01:52.714209  270927 command_runner.go:130] > # internal_wipe = true
	I0729 13:01:52.714221  270927 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 13:01:52.714230  270927 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 13:01:52.714389  270927 command_runner.go:130] > # internal_repair = false
	I0729 13:01:52.714398  270927 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 13:01:52.714407  270927 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 13:01:52.714417  270927 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 13:01:52.714738  270927 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 13:01:52.714748  270927 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 13:01:52.714753  270927 command_runner.go:130] > [crio.api]
	I0729 13:01:52.714761  270927 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 13:01:52.714942  270927 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 13:01:52.714961  270927 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 13:01:52.715185  270927 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 13:01:52.715202  270927 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 13:01:52.715210  270927 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 13:01:52.715428  270927 command_runner.go:130] > # stream_port = "0"
	I0729 13:01:52.715444  270927 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 13:01:52.715709  270927 command_runner.go:130] > # stream_enable_tls = false
	I0729 13:01:52.715723  270927 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 13:01:52.715966  270927 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 13:01:52.715980  270927 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 13:01:52.715989  270927 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 13:01:52.715996  270927 command_runner.go:130] > # minutes.
	I0729 13:01:52.716106  270927 command_runner.go:130] > # stream_tls_cert = ""
	I0729 13:01:52.716123  270927 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 13:01:52.716133  270927 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 13:01:52.716313  270927 command_runner.go:130] > # stream_tls_key = ""
	I0729 13:01:52.716326  270927 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 13:01:52.716336  270927 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 13:01:52.716355  270927 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 13:01:52.716621  270927 command_runner.go:130] > # stream_tls_ca = ""
	I0729 13:01:52.716639  270927 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 13:01:52.716649  270927 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 13:01:52.716660  270927 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 13:01:52.716672  270927 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 13:01:52.716681  270927 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 13:01:52.716691  270927 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 13:01:52.716700  270927 command_runner.go:130] > [crio.runtime]
	I0729 13:01:52.716710  270927 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 13:01:52.716721  270927 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 13:01:52.716730  270927 command_runner.go:130] > # "nofile=1024:2048"
	I0729 13:01:52.716742  270927 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 13:01:52.716752  270927 command_runner.go:130] > # default_ulimits = [
	I0729 13:01:52.716759  270927 command_runner.go:130] > # ]
	I0729 13:01:52.716772  270927 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 13:01:52.716841  270927 command_runner.go:130] > # no_pivot = false
	I0729 13:01:52.716861  270927 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 13:01:52.716870  270927 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 13:01:52.716878  270927 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 13:01:52.716888  270927 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 13:01:52.716898  270927 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 13:01:52.716910  270927 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 13:01:52.716920  270927 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 13:01:52.716927  270927 command_runner.go:130] > # Cgroup setting for conmon
	I0729 13:01:52.716939  270927 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 13:01:52.716948  270927 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 13:01:52.716957  270927 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 13:01:52.716968  270927 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 13:01:52.716980  270927 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 13:01:52.716990  270927 command_runner.go:130] > conmon_env = [
	I0729 13:01:52.717001  270927 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 13:01:52.717009  270927 command_runner.go:130] > ]
	I0729 13:01:52.717018  270927 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 13:01:52.717027  270927 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 13:01:52.717039  270927 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 13:01:52.717046  270927 command_runner.go:130] > # default_env = [
	I0729 13:01:52.717054  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717064  270927 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 13:01:52.717079  270927 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 13:01:52.717145  270927 command_runner.go:130] > # selinux = false
	I0729 13:01:52.717166  270927 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 13:01:52.717188  270927 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 13:01:52.717199  270927 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 13:01:52.717207  270927 command_runner.go:130] > # seccomp_profile = ""
	I0729 13:01:52.717219  270927 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 13:01:52.717228  270927 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 13:01:52.717244  270927 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 13:01:52.717254  270927 command_runner.go:130] > # which might increase security.
	I0729 13:01:52.717264  270927 command_runner.go:130] > # This option is currently deprecated,
	I0729 13:01:52.717276  270927 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 13:01:52.717285  270927 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 13:01:52.717296  270927 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 13:01:52.717308  270927 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 13:01:52.717319  270927 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 13:01:52.717332  270927 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 13:01:52.717342  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.717351  270927 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 13:01:52.717363  270927 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 13:01:52.717374  270927 command_runner.go:130] > # the cgroup blockio controller.
	I0729 13:01:52.717381  270927 command_runner.go:130] > # blockio_config_file = ""
	I0729 13:01:52.717393  270927 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 13:01:52.717402  270927 command_runner.go:130] > # blockio parameters.
	I0729 13:01:52.717410  270927 command_runner.go:130] > # blockio_reload = false
	I0729 13:01:52.717423  270927 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 13:01:52.717430  270927 command_runner.go:130] > # irqbalance daemon.
	I0729 13:01:52.717440  270927 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 13:01:52.717452  270927 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 13:01:52.717466  270927 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 13:01:52.717482  270927 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 13:01:52.717494  270927 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 13:01:52.717507  270927 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 13:01:52.717518  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.717528  270927 command_runner.go:130] > # rdt_config_file = ""
	I0729 13:01:52.717539  270927 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 13:01:52.717551  270927 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 13:01:52.717575  270927 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 13:01:52.717587  270927 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 13:01:52.717598  270927 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 13:01:52.717612  270927 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 13:01:52.717621  270927 command_runner.go:130] > # will be added.
	I0729 13:01:52.717627  270927 command_runner.go:130] > # default_capabilities = [
	I0729 13:01:52.717636  270927 command_runner.go:130] > # 	"CHOWN",
	I0729 13:01:52.717644  270927 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 13:01:52.717653  270927 command_runner.go:130] > # 	"FSETID",
	I0729 13:01:52.717658  270927 command_runner.go:130] > # 	"FOWNER",
	I0729 13:01:52.717665  270927 command_runner.go:130] > # 	"SETGID",
	I0729 13:01:52.717673  270927 command_runner.go:130] > # 	"SETUID",
	I0729 13:01:52.717678  270927 command_runner.go:130] > # 	"SETPCAP",
	I0729 13:01:52.717695  270927 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 13:01:52.717705  270927 command_runner.go:130] > # 	"KILL",
	I0729 13:01:52.717710  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717729  270927 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 13:01:52.717743  270927 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 13:01:52.717758  270927 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 13:01:52.717773  270927 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 13:01:52.717785  270927 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 13:01:52.717794  270927 command_runner.go:130] > default_sysctls = [
	I0729 13:01:52.717802  270927 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 13:01:52.717810  270927 command_runner.go:130] > ]
	I0729 13:01:52.717818  270927 command_runner.go:130] > # List of devices on the host that a
	I0729 13:01:52.717829  270927 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 13:01:52.717839  270927 command_runner.go:130] > # allowed_devices = [
	I0729 13:01:52.717848  270927 command_runner.go:130] > # 	"/dev/fuse",
	I0729 13:01:52.717855  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717862  270927 command_runner.go:130] > # List of additional devices. specified as
	I0729 13:01:52.717875  270927 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 13:01:52.717887  270927 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 13:01:52.717899  270927 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 13:01:52.717909  270927 command_runner.go:130] > # additional_devices = [
	I0729 13:01:52.717914  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717925  270927 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 13:01:52.717932  270927 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 13:01:52.717941  270927 command_runner.go:130] > # 	"/etc/cdi",
	I0729 13:01:52.717948  270927 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 13:01:52.717956  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717965  270927 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 13:01:52.717978  270927 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 13:01:52.717987  270927 command_runner.go:130] > # Defaults to false.
	I0729 13:01:52.718006  270927 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 13:01:52.718018  270927 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 13:01:52.718028  270927 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 13:01:52.718038  270927 command_runner.go:130] > # hooks_dir = [
	I0729 13:01:52.718045  270927 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 13:01:52.718052  270927 command_runner.go:130] > # ]
	I0729 13:01:52.718062  270927 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 13:01:52.718077  270927 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 13:01:52.718087  270927 command_runner.go:130] > # its default mounts from the following two files:
	I0729 13:01:52.718095  270927 command_runner.go:130] > #
	I0729 13:01:52.718104  270927 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 13:01:52.718116  270927 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 13:01:52.718125  270927 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 13:01:52.718133  270927 command_runner.go:130] > #
	I0729 13:01:52.718142  270927 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 13:01:52.718156  270927 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 13:01:52.718169  270927 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 13:01:52.718180  270927 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 13:01:52.718187  270927 command_runner.go:130] > #
	I0729 13:01:52.718197  270927 command_runner.go:130] > # default_mounts_file = ""
	I0729 13:01:52.718209  270927 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 13:01:52.718221  270927 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 13:01:52.718229  270927 command_runner.go:130] > pids_limit = 1024
	I0729 13:01:52.718242  270927 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0729 13:01:52.718253  270927 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 13:01:52.718266  270927 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 13:01:52.718281  270927 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 13:01:52.718290  270927 command_runner.go:130] > # log_size_max = -1
	I0729 13:01:52.718304  270927 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 13:01:52.718313  270927 command_runner.go:130] > # log_to_journald = false
	I0729 13:01:52.718329  270927 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 13:01:52.718340  270927 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 13:01:52.718353  270927 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 13:01:52.718365  270927 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 13:01:52.718376  270927 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 13:01:52.718385  270927 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 13:01:52.718399  270927 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 13:01:52.718408  270927 command_runner.go:130] > # read_only = false
	I0729 13:01:52.718417  270927 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 13:01:52.718432  270927 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 13:01:52.718439  270927 command_runner.go:130] > # live configuration reload.
	I0729 13:01:52.718445  270927 command_runner.go:130] > # log_level = "info"
	I0729 13:01:52.718453  270927 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 13:01:52.718462  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.718467  270927 command_runner.go:130] > # log_filter = ""
	I0729 13:01:52.718480  270927 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 13:01:52.718489  270927 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 13:01:52.718498  270927 command_runner.go:130] > # separated by comma.
	I0729 13:01:52.718508  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718517  270927 command_runner.go:130] > # uid_mappings = ""
	I0729 13:01:52.718524  270927 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 13:01:52.718535  270927 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 13:01:52.718541  270927 command_runner.go:130] > # separated by comma.
	I0729 13:01:52.718551  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718559  270927 command_runner.go:130] > # gid_mappings = ""
	I0729 13:01:52.718568  270927 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 13:01:52.718579  270927 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 13:01:52.718589  270927 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 13:01:52.718602  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718611  270927 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 13:01:52.718620  270927 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 13:01:52.718631  270927 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 13:01:52.718642  270927 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 13:01:52.718656  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718665  270927 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 13:01:52.718675  270927 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 13:01:52.718687  270927 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 13:01:52.718701  270927 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 13:01:52.718711  270927 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 13:01:52.718719  270927 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 13:01:52.718735  270927 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 13:01:52.718745  270927 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 13:01:52.718755  270927 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 13:01:52.718768  270927 command_runner.go:130] > drop_infra_ctr = false
	I0729 13:01:52.718779  270927 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 13:01:52.718795  270927 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 13:01:52.718808  270927 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 13:01:52.718818  270927 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 13:01:52.718829  270927 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 13:01:52.718841  270927 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 13:01:52.718849  270927 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 13:01:52.718860  270927 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 13:01:52.718866  270927 command_runner.go:130] > # shared_cpuset = ""
	I0729 13:01:52.718879  270927 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 13:01:52.718889  270927 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 13:01:52.718899  270927 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 13:01:52.718910  270927 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 13:01:52.718919  270927 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 13:01:52.718933  270927 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 13:01:52.718965  270927 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 13:01:52.718973  270927 command_runner.go:130] > # enable_criu_support = false
	I0729 13:01:52.718980  270927 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 13:01:52.718990  270927 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 13:01:52.719008  270927 command_runner.go:130] > # enable_pod_events = false
	I0729 13:01:52.719020  270927 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 13:01:52.719029  270927 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 13:01:52.719041  270927 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 13:01:52.719049  270927 command_runner.go:130] > # default_runtime = "runc"
	I0729 13:01:52.719058  270927 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 13:01:52.719073  270927 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 13:01:52.719089  270927 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 13:01:52.719100  270927 command_runner.go:130] > # creation as a file is not desired either.
	I0729 13:01:52.719113  270927 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 13:01:52.719124  270927 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 13:01:52.719132  270927 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 13:01:52.719139  270927 command_runner.go:130] > # ]
	I0729 13:01:52.719147  270927 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 13:01:52.719158  270927 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 13:01:52.719172  270927 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 13:01:52.719183  270927 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 13:01:52.719190  270927 command_runner.go:130] > #
	I0729 13:01:52.719198  270927 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 13:01:52.719208  270927 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 13:01:52.719234  270927 command_runner.go:130] > # runtime_type = "oci"
	I0729 13:01:52.719244  270927 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 13:01:52.719251  270927 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 13:01:52.719260  270927 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 13:01:52.719270  270927 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 13:01:52.719278  270927 command_runner.go:130] > # monitor_env = []
	I0729 13:01:52.719289  270927 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 13:01:52.719299  270927 command_runner.go:130] > # allowed_annotations = []
	I0729 13:01:52.719309  270927 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 13:01:52.719317  270927 command_runner.go:130] > # Where:
	I0729 13:01:52.719325  270927 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 13:01:52.719338  270927 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 13:01:52.719347  270927 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 13:01:52.719359  270927 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 13:01:52.719368  270927 command_runner.go:130] > #   in $PATH.
	I0729 13:01:52.719377  270927 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 13:01:52.719387  270927 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 13:01:52.719400  270927 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 13:01:52.719407  270927 command_runner.go:130] > #   state.
	I0729 13:01:52.719419  270927 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 13:01:52.719430  270927 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 13:01:52.719440  270927 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 13:01:52.719450  270927 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 13:01:52.719463  270927 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 13:01:52.719475  270927 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 13:01:52.719485  270927 command_runner.go:130] > #   The currently recognized values are:
	I0729 13:01:52.719498  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 13:01:52.719511  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 13:01:52.719523  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 13:01:52.719535  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 13:01:52.719548  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 13:01:52.719563  270927 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 13:01:52.719575  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 13:01:52.719587  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 13:01:52.719600  270927 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 13:01:52.719612  270927 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 13:01:52.719620  270927 command_runner.go:130] > #   deprecated option "conmon".
	I0729 13:01:52.719629  270927 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 13:01:52.719636  270927 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 13:01:52.719646  270927 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 13:01:52.719653  270927 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 13:01:52.719659  270927 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 13:01:52.719667  270927 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 13:01:52.719673  270927 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 13:01:52.719680  270927 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 13:01:52.719683  270927 command_runner.go:130] > #
	I0729 13:01:52.719688  270927 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 13:01:52.719693  270927 command_runner.go:130] > #
	I0729 13:01:52.719699  270927 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 13:01:52.719709  270927 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 13:01:52.719714  270927 command_runner.go:130] > #
	I0729 13:01:52.719720  270927 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 13:01:52.719727  270927 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 13:01:52.719733  270927 command_runner.go:130] > #
	I0729 13:01:52.719739  270927 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 13:01:52.719744  270927 command_runner.go:130] > # feature.
	I0729 13:01:52.719748  270927 command_runner.go:130] > #
	I0729 13:01:52.719757  270927 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 13:01:52.719765  270927 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 13:01:52.719771  270927 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 13:01:52.719779  270927 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 13:01:52.719785  270927 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 13:01:52.719791  270927 command_runner.go:130] > #
	I0729 13:01:52.719797  270927 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 13:01:52.719805  270927 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 13:01:52.719811  270927 command_runner.go:130] > #
	I0729 13:01:52.719817  270927 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 13:01:52.719826  270927 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 13:01:52.719832  270927 command_runner.go:130] > #
	I0729 13:01:52.719838  270927 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 13:01:52.719845  270927 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 13:01:52.719851  270927 command_runner.go:130] > # limitation.
	I0729 13:01:52.719855  270927 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 13:01:52.719861  270927 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 13:01:52.719865  270927 command_runner.go:130] > runtime_type = "oci"
	I0729 13:01:52.719869  270927 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 13:01:52.719876  270927 command_runner.go:130] > runtime_config_path = ""
	I0729 13:01:52.719882  270927 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 13:01:52.719886  270927 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 13:01:52.719892  270927 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 13:01:52.719896  270927 command_runner.go:130] > monitor_env = [
	I0729 13:01:52.719904  270927 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 13:01:52.719909  270927 command_runner.go:130] > ]
	I0729 13:01:52.719913  270927 command_runner.go:130] > privileged_without_host_devices = false
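	The [crio.runtime.runtimes.runc] block above is the concrete form of the runtime-handler table documented a few lines earlier. As a sketch only (the crun binary, its paths, the drop-in filename, and the allowed_annotations entry are assumptions for illustration, not part of this cluster's config), an extra handler that also permits the seccomp notifier annotation could be added through a CRI-O drop-in and picked up on restart:

	    # Sketch: add a hypothetical "crun" handler via a CRI-O drop-in file.
	    sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    monitor_path = "/usr/libexec/crio/conmon"
	    monitor_cgroup = "pod"
	    allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	    EOF
	    sudo systemctl restart crio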
	I0729 13:01:52.719921  270927 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 13:01:52.719929  270927 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 13:01:52.719935  270927 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 13:01:52.719944  270927 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 13:01:52.719954  270927 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 13:01:52.719959  270927 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 13:01:52.719969  270927 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 13:01:52.719979  270927 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 13:01:52.719986  270927 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 13:01:52.719996  270927 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 13:01:52.720006  270927 command_runner.go:130] > # Example:
	I0729 13:01:52.720010  270927 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 13:01:52.720015  270927 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 13:01:52.720019  270927 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 13:01:52.720023  270927 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 13:01:52.720026  270927 command_runner.go:130] > # cpuset = 0
	I0729 13:01:52.720030  270927 command_runner.go:130] > # cpushares = "0-1"
	I0729 13:01:52.720033  270927 command_runner.go:130] > # Where:
	I0729 13:01:52.720038  270927 command_runner.go:130] > # The workload name is workload-type.
	I0729 13:01:52.720045  270927 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 13:01:52.720050  270927 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 13:01:52.720055  270927 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 13:01:52.720062  270927 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 13:01:52.720067  270927 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 13:01:52.720072  270927 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 13:01:52.720078  270927 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 13:01:52.720082  270927 command_runner.go:130] > # Default value is set to true
	I0729 13:01:52.720086  270927 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 13:01:52.720091  270927 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 13:01:52.720095  270927 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 13:01:52.720099  270927 command_runner.go:130] > # Default value is set to 'false'
	I0729 13:01:52.720103  270927 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 13:01:52.720109  270927 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 13:01:52.720112  270927 command_runner.go:130] > #
	I0729 13:01:52.720117  270927 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 13:01:52.720124  270927 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 13:01:52.720130  270927 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 13:01:52.720136  270927 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 13:01:52.720141  270927 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 13:01:52.720144  270927 command_runner.go:130] > [crio.image]
	I0729 13:01:52.720149  270927 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 13:01:52.720155  270927 command_runner.go:130] > # default_transport = "docker://"
	I0729 13:01:52.720160  270927 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 13:01:52.720166  270927 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 13:01:52.720169  270927 command_runner.go:130] > # global_auth_file = ""
	I0729 13:01:52.720174  270927 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 13:01:52.720178  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.720182  270927 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 13:01:52.720188  270927 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 13:01:52.720197  270927 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 13:01:52.720201  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.720205  270927 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 13:01:52.720211  270927 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 13:01:52.720219  270927 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 13:01:52.720225  270927 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 13:01:52.720234  270927 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 13:01:52.720240  270927 command_runner.go:130] > # pause_command = "/pause"
	I0729 13:01:52.720246  270927 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 13:01:52.720253  270927 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 13:01:52.720261  270927 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 13:01:52.720267  270927 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 13:01:52.720274  270927 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 13:01:52.720280  270927 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 13:01:52.720286  270927 command_runner.go:130] > # pinned_images = [
	I0729 13:01:52.720290  270927 command_runner.go:130] > # ]
	I0729 13:01:52.720297  270927 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 13:01:52.720305  270927 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 13:01:52.720313  270927 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 13:01:52.720321  270927 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 13:01:52.720327  270927 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 13:01:52.720333  270927 command_runner.go:130] > # signature_policy = ""
	I0729 13:01:52.720339  270927 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 13:01:52.720347  270927 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 13:01:52.720355  270927 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 13:01:52.720361  270927 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 13:01:52.720369  270927 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 13:01:52.720373  270927 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 13:01:52.720381  270927 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 13:01:52.720389  270927 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 13:01:52.720393  270927 command_runner.go:130] > # changing them here.
	I0729 13:01:52.720399  270927 command_runner.go:130] > # insecure_registries = [
	I0729 13:01:52.720402  270927 command_runner.go:130] > # ]
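	The comments above defer registry settings to containers-registries.conf(5) rather than insecure_registries here. A minimal sketch, assuming a purely hypothetical internal registry host, of marking such a registry insecure through a registries drop-in:

	    # Sketch: "registry.example.internal:5000" is a hypothetical registry host.
	    sudo tee /etc/containers/registries.conf.d/99-insecure.conf <<'EOF'
	    [[registry]]
	    location = "registry.example.internal:5000"
	    insecure = true
	    EOF
	    sudo systemctl restart crio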
	I0729 13:01:52.720408  270927 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 13:01:52.720415  270927 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 13:01:52.720419  270927 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 13:01:52.720426  270927 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 13:01:52.720430  270927 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 13:01:52.720438  270927 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 13:01:52.720446  270927 command_runner.go:130] > # CNI plugins.
	I0729 13:01:52.720453  270927 command_runner.go:130] > [crio.network]
	I0729 13:01:52.720465  270927 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 13:01:52.720477  270927 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 13:01:52.720486  270927 command_runner.go:130] > # cni_default_network = ""
	I0729 13:01:52.720496  270927 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 13:01:52.720506  270927 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 13:01:52.720518  270927 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 13:01:52.720527  270927 command_runner.go:130] > # plugin_dirs = [
	I0729 13:01:52.720533  270927 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 13:01:52.720541  270927 command_runner.go:130] > # ]
	I0729 13:01:52.720549  270927 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 13:01:52.720558  270927 command_runner.go:130] > [crio.metrics]
	I0729 13:01:52.720565  270927 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 13:01:52.720574  270927 command_runner.go:130] > enable_metrics = true
	I0729 13:01:52.720582  270927 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 13:01:52.720591  270927 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 13:01:52.720602  270927 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 13:01:52.720615  270927 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 13:01:52.720626  270927 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 13:01:52.720635  270927 command_runner.go:130] > # metrics_collectors = [
	I0729 13:01:52.720644  270927 command_runner.go:130] > # 	"operations",
	I0729 13:01:52.720652  270927 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 13:01:52.720662  270927 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 13:01:52.720668  270927 command_runner.go:130] > # 	"operations_errors",
	I0729 13:01:52.720677  270927 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 13:01:52.720682  270927 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 13:01:52.720689  270927 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 13:01:52.720693  270927 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 13:01:52.720700  270927 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 13:01:52.720704  270927 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 13:01:52.720710  270927 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 13:01:52.720715  270927 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 13:01:52.720721  270927 command_runner.go:130] > # 	"containers_oom_total",
	I0729 13:01:52.720727  270927 command_runner.go:130] > # 	"containers_oom",
	I0729 13:01:52.720733  270927 command_runner.go:130] > # 	"processes_defunct",
	I0729 13:01:52.720737  270927 command_runner.go:130] > # 	"operations_total",
	I0729 13:01:52.720745  270927 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 13:01:52.720753  270927 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 13:01:52.720761  270927 command_runner.go:130] > # 	"operations_errors_total",
	I0729 13:01:52.720766  270927 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 13:01:52.720773  270927 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 13:01:52.720779  270927 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 13:01:52.720789  270927 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 13:01:52.720808  270927 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 13:01:52.720818  270927 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 13:01:52.720825  270927 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 13:01:52.720834  270927 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 13:01:52.720841  270927 command_runner.go:130] > # ]
	I0729 13:01:52.720851  270927 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 13:01:52.720861  270927 command_runner.go:130] > # metrics_port = 9090
	I0729 13:01:52.720868  270927 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 13:01:52.720877  270927 command_runner.go:130] > # metrics_socket = ""
	I0729 13:01:52.720885  270927 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 13:01:52.720897  270927 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 13:01:52.720908  270927 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 13:01:52.720915  270927 command_runner.go:130] > # certificate on any modification event.
	I0729 13:01:52.720919  270927 command_runner.go:130] > # metrics_cert = ""
	I0729 13:01:52.720926  270927 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 13:01:52.720931  270927 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 13:01:52.720937  270927 command_runner.go:130] > # metrics_key = ""
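	With enable_metrics = true and the default metrics_port of 9090 shown above, the collectors listed earlier are exposed in Prometheus text format. A quick check from the node, assuming the endpoint is reachable on localhost and using an example grep pattern, would be:

	    # Fetch CRI-O's Prometheus metrics from the default port and pick out one collector family.
	    curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio_operations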
	I0729 13:01:52.720943  270927 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 13:01:52.720949  270927 command_runner.go:130] > [crio.tracing]
	I0729 13:01:52.720954  270927 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 13:01:52.720961  270927 command_runner.go:130] > # enable_tracing = false
	I0729 13:01:52.720966  270927 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 13:01:52.720972  270927 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 13:01:52.720978  270927 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 13:01:52.720985  270927 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 13:01:52.720989  270927 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 13:01:52.720994  270927 command_runner.go:130] > [crio.nri]
	I0729 13:01:52.721002  270927 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 13:01:52.721008  270927 command_runner.go:130] > # enable_nri = false
	I0729 13:01:52.721012  270927 command_runner.go:130] > # NRI socket to listen on.
	I0729 13:01:52.721017  270927 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 13:01:52.721022  270927 command_runner.go:130] > # NRI plugin directory to use.
	I0729 13:01:52.721029  270927 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 13:01:52.721034  270927 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 13:01:52.721040  270927 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 13:01:52.721045  270927 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 13:01:52.721050  270927 command_runner.go:130] > # nri_disable_connections = false
	I0729 13:01:52.721057  270927 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 13:01:52.721062  270927 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 13:01:52.721069  270927 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 13:01:52.721073  270927 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 13:01:52.721081  270927 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 13:01:52.721085  270927 command_runner.go:130] > [crio.stats]
	I0729 13:01:52.721092  270927 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 13:01:52.721098  270927 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 13:01:52.721104  270927 command_runner.go:130] > # stats_collection_period = 0
	I0729 13:01:52.721124  270927 command_runner.go:130] ! time="2024-07-29 13:01:52.689369031Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 13:01:52.721137  270927 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 13:01:52.721254  270927 cni.go:84] Creating CNI manager for ""
	I0729 13:01:52.721264  270927 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 13:01:52.721274  270927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:01:52.721295  270927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-786745 NodeName:multinode-786745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:01:52.721443  270927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-786745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:01:52.721516  270927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:01:52.731983  270927 command_runner.go:130] > kubeadm
	I0729 13:01:52.732010  270927 command_runner.go:130] > kubectl
	I0729 13:01:52.732017  270927 command_runner.go:130] > kubelet
	I0729 13:01:52.732049  270927 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:01:52.732097  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:01:52.742110  270927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0729 13:01:52.758435  270927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:01:52.774418  270927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
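	The kubeadm config printed above has just been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. When a restart like this one fails, a sketch of how it can be compared against what the cluster actually holds (profile and context names taken from this log):

	    # Inspect the rendered kubeadm config on the node and the in-cluster copy.
	    minikube ssh -p multinode-786745 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	    kubectl --context multinode-786745 -n kube-system get configmap kubeadm-config -o yaml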
	I0729 13:01:52.791581  270927 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0729 13:01:52.795191  270927 command_runner.go:130] > 192.168.39.10	control-plane.minikube.internal
	I0729 13:01:52.795325  270927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:01:52.929313  270927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:01:52.944697  270927 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745 for IP: 192.168.39.10
	I0729 13:01:52.944723  270927 certs.go:194] generating shared ca certs ...
	I0729 13:01:52.944745  270927 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:01:52.944941  270927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:01:52.945007  270927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:01:52.945023  270927 certs.go:256] generating profile certs ...
	I0729 13:01:52.945113  270927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/client.key
	I0729 13:01:52.945204  270927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.key.fa4f91be
	I0729 13:01:52.945261  270927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.key
	I0729 13:01:52.945279  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:01:52.945301  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:01:52.945320  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:01:52.945337  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:01:52.945355  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:01:52.945375  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:01:52.945392  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:01:52.945410  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:01:52.945476  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:01:52.945514  270927 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:01:52.945529  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:01:52.945561  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:01:52.945592  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:01:52.945716  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:01:52.945832  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:01:52.945879  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 13:01:52.945901  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:52.945920  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 13:01:52.946585  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:01:52.970344  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:01:52.994188  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:01:53.016466  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:01:53.039443  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:01:53.062299  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:01:53.085162  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:01:53.107944  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:01:53.130742  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:01:53.152815  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:01:53.175499  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:01:53.198023  270927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:01:53.214307  270927 ssh_runner.go:195] Run: openssl version
	I0729 13:01:53.219968  270927 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 13:01:53.220237  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:01:53.232158  270927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.236511  270927 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.236768  270927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.236837  270927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.242895  270927 command_runner.go:130] > 3ec20f2e
	I0729 13:01:53.243075  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:01:53.253132  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:01:53.264809  270927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.269110  270927 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.269279  270927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.269317  270927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.275002  270927 command_runner.go:130] > b5213941
	I0729 13:01:53.275054  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:01:53.285881  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:01:53.298389  270927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.303093  270927 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.303237  270927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.303281  270927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.309019  270927 command_runner.go:130] > 51391683
	I0729 13:01:53.309204  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
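	The three ln -fs steps above all follow the same pattern: openssl x509 -hash prints the subject hash that OpenSSL expects as the <hash>.0 filename under /etc/ssl/certs, which is why the b5213941 output becomes the b5213941.0 symlink. A sketch of repeating that check by hand for the minikube CA, using the paths from this log:

	    # Compute the subject hash and confirm the matching symlink makes the CA verifiable.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${hash}.0"
	    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem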
	I0729 13:01:53.321115  270927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:01:53.325819  270927 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:01:53.325836  270927 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 13:01:53.325843  270927 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0729 13:01:53.325852  270927 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 13:01:53.325868  270927 command_runner.go:130] > Access: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325875  270927 command_runner.go:130] > Modify: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325884  270927 command_runner.go:130] > Change: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325892  270927 command_runner.go:130] >  Birth: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325958  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:01:53.331939  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.332069  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:01:53.337425  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.337660  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:01:53.343130  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.343292  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:01:53.348663  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.348836  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:01:53.354025  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.354219  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:01:53.359894  270927 command_runner.go:130] > Certificate will not expire
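	Each "Certificate will not expire" line above is the stdout of openssl x509 -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now. A sketch of the same check with an explicit exit-status branch, using one of the cert paths from this log:

	    # -checkend N succeeds (exit 0) if the cert will still be valid N seconds from now.
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	      echo "apiserver.crt is good for at least another 24h"
	    else
	      echo "apiserver.crt expires within 24h; regenerate before restarting the cluster"
	    fi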
	I0729 13:01:53.360124  270927 kubeadm.go:392] StartCluster: {Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:01:53.360231  270927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:01:53.360290  270927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:01:53.406322  270927 command_runner.go:130] > 30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4
	I0729 13:01:53.406426  270927 command_runner.go:130] > 0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd
	I0729 13:01:53.406447  270927 command_runner.go:130] > 45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac
	I0729 13:01:53.406461  270927 command_runner.go:130] > 5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937
	I0729 13:01:53.406476  270927 command_runner.go:130] > ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24
	I0729 13:01:53.406487  270927 command_runner.go:130] > ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367
	I0729 13:01:53.406516  270927 command_runner.go:130] > a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae
	I0729 13:01:53.406535  270927 command_runner.go:130] > 294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb
	I0729 13:01:53.408007  270927 cri.go:89] found id: "30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4"
	I0729 13:01:53.408024  270927 cri.go:89] found id: "0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd"
	I0729 13:01:53.408028  270927 cri.go:89] found id: "45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac"
	I0729 13:01:53.408031  270927 cri.go:89] found id: "5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937"
	I0729 13:01:53.408034  270927 cri.go:89] found id: "ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24"
	I0729 13:01:53.408037  270927 cri.go:89] found id: "ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367"
	I0729 13:01:53.408039  270927 cri.go:89] found id: "a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae"
	I0729 13:01:53.408042  270927 cri.go:89] found id: "294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb"
	I0729 13:01:53.408045  270927 cri.go:89] found id: ""
	I0729 13:01:53.408091  270927 ssh_runner.go:195] Run: sudo runc list -f json
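	StartCluster first enumerates existing kube-system containers with crictl ps -a --quiet filtered by the pod-namespace label, then cross-checks runc's view. The same filter can be re-run by hand with readable output when a restart hangs; a sketch, where the container id is simply one returned by the listing above:

	    # List kube-system containers (running and exited) and inspect one of them.
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	    sudo crictl inspect 30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4
	    sudo runc list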
	
	
	==> CRI-O <==
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.727946657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258222727923296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e88dc2c4-6214-42b3-9b2e-ab35bef19e1e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.728474151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be59f552-1e85-46d2-b139-bd7b7b55ec01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.728543834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be59f552-1e85-46d2-b139-bd7b7b55ec01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.728937597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be59f552-1e85-46d2-b139-bd7b7b55ec01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.773185102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b736d9e1-446b-4d07-bda5-7786757b952f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.773279114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b736d9e1-446b-4d07-bda5-7786757b952f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.774785462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e4bd05e-78c6-4549-a754-9c4e31a4ce98 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.775205140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258222775183570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e4bd05e-78c6-4549-a754-9c4e31a4ce98 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.776199298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=234d74ee-8bfe-408b-add4-b22a9e020409 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.776375923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=234d74ee-8bfe-408b-add4-b22a9e020409 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.776805556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=234d74ee-8bfe-408b-add4-b22a9e020409 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.818752201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3e3b7c2-8830-4a0c-bd72-a64403f558c9 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.818959235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3e3b7c2-8830-4a0c-bd72-a64403f558c9 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.820080969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6da12c14-dd88-4051-aecd-0f19d0a5668e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.820536904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258222820515701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6da12c14-dd88-4051-aecd-0f19d0a5668e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.820985164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e99a7d4-1bf4-4317-abc8-5926c5f367ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.821058822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e99a7d4-1bf4-4317-abc8-5926c5f367ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.821432640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e99a7d4-1bf4-4317-abc8-5926c5f367ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.868755744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83a52486-3dcd-4d8c-ae3d-0d919e9690b7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.868843679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83a52486-3dcd-4d8c-ae3d-0d919e9690b7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.869824839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=432df5f4-42f7-4901-ad28-1607848cc0ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.870256494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258222870229256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=432df5f4-42f7-4901-ad28-1607848cc0ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.870848835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4da87a10-1fdd-47ae-a187-4dc5766bc80a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.870921092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4da87a10-1fdd-47ae-a187-4dc5766bc80a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:03:42 multinode-786745 crio[2878]: time="2024-07-29 13:03:42.871257534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4da87a10-1fdd-47ae-a187-4dc5766bc80a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8e28cd1159490       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   74c91730aa10b       busybox-fc5497c4f-cmdrr
	c26de213350ce       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   4e54fab8b623e       kindnet-wqdqp
	4248afd116e8a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   eefb4a0f49618       coredns-7db6d8ff4d-dbqpm
	169be91b864f4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   9eba997524f98       kube-proxy-x8bkl
	887d0a602ebd5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   d31205dfe2fe1       storage-provisioner
	211077d6da221       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   fb7f85c82c68e       etcd-multinode-786745
	0e8fccb33964b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   965a9c7af942e       kube-controller-manager-multinode-786745
	4cdfb5260fd6e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   301d79b34dfa9       kube-scheduler-multinode-786745
	30b482750e732       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   b7d5979d0ef19       kube-apiserver-multinode-786745
	2d6757ec4b506       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   dfe50b635a761       busybox-fc5497c4f-cmdrr
	30e55df3954ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   6a4e7f0d2c68c       coredns-7db6d8ff4d-dbqpm
	0bab72befc446       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   20383bc602506       storage-provisioner
	45f143f337828       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   429ba2e901b85       kindnet-wqdqp
	5fb78eca10406       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   e5cecfcbfdb2d       kube-proxy-x8bkl
	ff80069d557e8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   6f176de5ab935       kube-controller-manager-multinode-786745
	ad86c660fa96a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   bbc4a896a313d       kube-apiserver-multinode-786745
	a60d1b45bae61       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   dfc6102a64d12       etcd-multinode-786745
	294e87b4f8ed7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   18f0eeff23b41       kube-scheduler-multinode-786745
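
	The container listing above is CRI-O answering /runtime.v1.RuntimeService/ListContainers over its gRPC socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation). A minimal Go sketch of the same query, assuming the k8s.io/cri-api v1 client and grpc-go are available, might look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket the node annotation points at (assumed in-guest path).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same call as the ListContainers requests logged above, with no filter.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	Run inside the guest, this should print roughly the same IDs, names and states as the table above.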
	
	
	==> coredns [30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4] <==
	[INFO] 10.244.1.2:52917 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001900471s
	[INFO] 10.244.1.2:51016 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091406s
	[INFO] 10.244.1.2:58676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068587s
	[INFO] 10.244.1.2:34892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294122s
	[INFO] 10.244.1.2:39674 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130339s
	[INFO] 10.244.1.2:44075 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065372s
	[INFO] 10.244.1.2:40152 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065131s
	[INFO] 10.244.0.3:35613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009574s
	[INFO] 10.244.0.3:44974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098718s
	[INFO] 10.244.0.3:46544 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066254s
	[INFO] 10.244.0.3:38737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071673s
	[INFO] 10.244.1.2:42977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150139s
	[INFO] 10.244.1.2:45957 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017653s
	[INFO] 10.244.1.2:57806 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112372s
	[INFO] 10.244.1.2:59253 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108422s
	[INFO] 10.244.0.3:33134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114617s
	[INFO] 10.244.0.3:48962 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000079995s
	[INFO] 10.244.0.3:46956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093325s
	[INFO] 10.244.0.3:35569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073722s
	[INFO] 10.244.1.2:42815 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128878s
	[INFO] 10.244.1.2:40198 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000079121s
	[INFO] 10.244.1.2:42658 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119204s
	[INFO] 10.244.1.2:42495 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069487s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32935 - 61842 "HINFO IN 2841602399000264551.6073448778032366050. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01130462s
	
	
	==> describe nodes <==
	Name:               multinode-786745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-786745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=multinode-786745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_55_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:54:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-786745
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:03:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:54:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:54:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:54:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:55:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    multinode-786745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06e9fb264e204bb9b5a3154b75b88dcf
	  System UUID:                06e9fb26-4e20-4bb9-b5a3-154b75b88dcf
	  Boot ID:                    e0f0f261-ef7f-48ba-ac73-457378c5e0ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cmdrr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 coredns-7db6d8ff4d-dbqpm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m28s
	  kube-system                 etcd-multinode-786745                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m42s
	  kube-system                 kindnet-wqdqp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m28s
	  kube-system                 kube-apiserver-multinode-786745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-controller-manager-multinode-786745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-proxy-x8bkl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-multinode-786745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m26s                kube-proxy       
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m42s                kubelet          Node multinode-786745 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m42s                kubelet          Node multinode-786745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m42s                kubelet          Node multinode-786745 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m42s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m29s                node-controller  Node multinode-786745 event: Registered Node multinode-786745 in Controller
	  Normal  NodeReady                8m11s                kubelet          Node multinode-786745 status is now: NodeReady
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node multinode-786745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node multinode-786745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node multinode-786745 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                  node-controller  Node multinode-786745 event: Registered Node multinode-786745 in Controller
	
	
	Name:               multinode-786745-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-786745-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=multinode-786745
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_02_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:02:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-786745-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:03:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:02:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:02:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:02:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:03:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    multinode-786745-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5de3da07252944eab9df4eb5bd47f786
	  System UUID:                5de3da07-2529-44ea-b9df-4eb5bd47f786
	  Boot ID:                    49ff78ff-6a59-463c-9087-5d32bd59581d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dbtnh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kindnet-knz5q              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m39s
	  kube-system                 kube-proxy-rhx5z           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m33s                  kube-proxy  
	  Normal  Starting                 59s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m40s (x2 over 7m40s)  kubelet     Node multinode-786745-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x2 over 7m40s)  kubelet     Node multinode-786745-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x2 over 7m40s)  kubelet     Node multinode-786745-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m18s                  kubelet     Node multinode-786745-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  64s (x2 over 64s)      kubelet     Node multinode-786745-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x2 over 64s)      kubelet     Node multinode-786745-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x2 over 64s)      kubelet     Node multinode-786745-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                43s                    kubelet     Node multinode-786745-m02 status is now: NodeReady
	
	
	Name:               multinode-786745-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-786745-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=multinode-786745
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_03_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-786745-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:03:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:03:39 +0000   Mon, 29 Jul 2024 13:03:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:03:39 +0000   Mon, 29 Jul 2024 13:03:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:03:39 +0000   Mon, 29 Jul 2024 13:03:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:03:39 +0000   Mon, 29 Jul 2024 13:03:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    multinode-786745-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5149c01ac4e44c2a9db415f24488886e
	  System UUID:                5149c01a-c4e4-4c2a-9db4-15f24488886e
	  Boot ID:                    4e5a19aa-c225-4f47-95ff-106d52a59c12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rz9s       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-hsvcn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m34s                  kube-proxy  
	  Normal  Starting                 19s                    kube-proxy  
	  Normal  Starting                 5m44s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m40s (x2 over 6m40s)  kubelet     Node multinode-786745-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x2 over 6m40s)  kubelet     Node multinode-786745-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x2 over 6m40s)  kubelet     Node multinode-786745-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m19s                  kubelet     Node multinode-786745-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet     Node multinode-786745-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet     Node multinode-786745-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet     Node multinode-786745-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-786745-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  24s (x2 over 24s)      kubelet     Node multinode-786745-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x2 over 24s)      kubelet     Node multinode-786745-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x2 over 24s)      kubelet     Node multinode-786745-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-786745-m03 status is now: NodeReady
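
	The three node records above are `kubectl describe node` output for the control plane and both workers. A short client-go sketch, assuming a kubeconfig at $HOME/.kube/config pointing at this cluster, that reports the same Ready condition and PodCIDR per node:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Out-of-cluster config; the kubeconfig path is an assumption for this sketch.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			ready := "Unknown"
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					ready = string(c.Status)
				}
			}
			fmt.Printf("%-22s Ready=%-7s PodCIDR=%s\n", n.Name, ready, n.Spec.PodCIDR)
		}
	}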
	
	
	==> dmesg <==
	[  +0.053288] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056040] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.180428] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.121356] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.287823] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.166258] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.891371] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.060675] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 12:55] systemd-fstab-generator[1267]: Ignoring "noauto" option for root device
	[  +0.085459] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.505205] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.566089] systemd-fstab-generator[1464]: Ignoring "noauto" option for root device
	[  +5.837393] kauditd_printk_skb: 51 callbacks suppressed
	[Jul29 12:56] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 13:01] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.142358] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.169574] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.136009] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.272914] systemd-fstab-generator[2865]: Ignoring "noauto" option for root device
	[  +0.700761] systemd-fstab-generator[2963]: Ignoring "noauto" option for root device
	[  +2.156807] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	[  +4.658548] kauditd_printk_skb: 184 callbacks suppressed
	[Jul29 13:02] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.480227] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[ +18.292639] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed] <==
	{"level":"info","ts":"2024-07-29T13:01:56.496856Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:01:56.500082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=(17911497232019635470)"}
	{"level":"info","ts":"2024-07-29T13:01:56.500176Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-07-29T13:01:56.500334Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:01:56.500381Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:01:56.520731Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:01:56.520767Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:01:56.52067Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:01:56.539845Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:01:56.539919Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:01:57.419038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:01:57.419101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:01:57.41914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-07-29T13:01:57.419154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.41916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.419204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.419237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.42441Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:multinode-786745 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:01:57.424463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:01:57.424951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:01:57.426576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-07-29T13:01:57.427685Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:01:57.427716Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:01:57.42848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:02:43.728496Z","caller":"traceutil/trace.go:171","msg":"trace[1428817440] transaction","detail":"{read_only:false; response_revision:1049; number_of_response:1; }","duration":"185.035836ms","start":"2024-07-29T13:02:43.543431Z","end":"2024-07-29T13:02:43.728467Z","steps":["trace[1428817440] 'process raft request'  (duration: 184.777152ms)"],"step_count":1}
	
	
	==> etcd [a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae] <==
	{"level":"info","ts":"2024-07-29T12:54:57.232651Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:54:57.232704Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:54:57.246679Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:54:57.246801Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:54:57.246845Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-07-29T12:56:04.054138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.030838ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4399613308981066646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:3d0e90fe8ee5bb95>","response":"size:41"}
	{"level":"info","ts":"2024-07-29T12:56:04.054355Z","caller":"traceutil/trace.go:171","msg":"trace[1649649370] linearizableReadLoop","detail":"{readStateIndex:468; appliedIndex:466; }","duration":"129.380389ms","start":"2024-07-29T12:56:03.924951Z","end":"2024-07-29T12:56:04.054331Z","steps":["trace[1649649370] 'read index received'  (duration: 128.868276ms)","trace[1649649370] 'applied index is now lower than readState.Index'  (duration: 511.609µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:56:04.054526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.550704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-786745-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T12:56:04.054581Z","caller":"traceutil/trace.go:171","msg":"trace[754789716] range","detail":"{range_begin:/registry/minions/multinode-786745-m02; range_end:; response_count:1; response_revision:446; }","duration":"129.624459ms","start":"2024-07-29T12:56:03.924928Z","end":"2024-07-29T12:56:04.054552Z","steps":["trace[754789716] 'agreement among raft nodes before linearized reading'  (duration: 129.510328ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:56:04.055054Z","caller":"traceutil/trace.go:171","msg":"trace[783250578] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"171.961229ms","start":"2024-07-29T12:56:03.883086Z","end":"2024-07-29T12:56:04.055047Z","steps":["trace[783250578] 'process raft request'  (duration: 171.179812ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:57:03.721915Z","caller":"traceutil/trace.go:171","msg":"trace[449441776] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"178.919054ms","start":"2024-07-29T12:57:03.54295Z","end":"2024-07-29T12:57:03.721869Z","steps":["trace[449441776] 'process raft request'  (duration: 178.87269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:57:03.722282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.701373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-29T12:57:03.722354Z","caller":"traceutil/trace.go:171","msg":"trace[609165710] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:586; }","duration":"182.923051ms","start":"2024-07-29T12:57:03.539414Z","end":"2024-07-29T12:57:03.722337Z","steps":["trace[609165710] 'agreement among raft nodes before linearized reading'  (duration: 182.621361ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:57:03.72192Z","caller":"traceutil/trace.go:171","msg":"trace[959648998] linearizableReadLoop","detail":"{readStateIndex:625; appliedIndex:624; }","duration":"182.396018ms","start":"2024-07-29T12:57:03.539491Z","end":"2024-07-29T12:57:03.721887Z","steps":["trace[959648998] 'read index received'  (duration: 104.695368ms)","trace[959648998] 'applied index is now lower than readState.Index'  (duration: 77.697634ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:57:03.722555Z","caller":"traceutil/trace.go:171","msg":"trace[1237263912] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"242.819304ms","start":"2024-07-29T12:57:03.479724Z","end":"2024-07-29T12:57:03.722543Z","steps":["trace[1237263912] 'process raft request'  (duration: 164.329128ms)","trace[1237263912] 'compare'  (duration: 77.637709ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:00:20.075444Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T13:00:20.075566Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-786745","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"warn","ts":"2024-07-29T13:00:20.075735Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:00:20.075834Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:00:20.160504Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:00:20.160546Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T13:00:20.16067Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2024-07-29T13:00:20.163379Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:00:20.163563Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:00:20.163651Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-786745","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> kernel <==
	 13:03:43 up 9 min,  0 users,  load average: 0.06, 0.13, 0.09
	Linux multinode-786745 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac] <==
	I0729 12:59:32.119324       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 12:59:42.121682       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 12:59:42.121803       1 main.go:299] handling current node
	I0729 12:59:42.121831       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 12:59:42.121849       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 12:59:42.121999       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 12:59:42.122020       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 12:59:52.116045       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 12:59:52.116232       1 main.go:299] handling current node
	I0729 12:59:52.116270       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 12:59:52.116278       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 12:59:52.116572       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 12:59:52.116666       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 13:00:02.120847       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:00:02.120946       1 main.go:299] handling current node
	I0729 13:00:02.120978       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:00:02.120985       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:00:02.121127       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:00:02.121134       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 13:00:12.119442       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:00:12.119546       1 main.go:299] handling current node
	I0729 13:00:12.119574       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:00:12.119690       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:00:12.119882       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:00:12.119907       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5] <==
	I0729 13:03:00.927373       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 13:03:10.926819       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:03:10.926967       1 main.go:299] handling current node
	I0729 13:03:10.927004       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:03:10.927023       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:03:10.927222       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:03:10.927246       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 13:03:20.926968       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:03:20.927028       1 main.go:299] handling current node
	I0729 13:03:20.927041       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:03:20.927047       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:03:20.927213       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:03:20.927236       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.2.0/24] 
	I0729 13:03:30.927119       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:03:30.927175       1 main.go:299] handling current node
	I0729 13:03:30.927190       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:03:30.927195       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:03:30.927328       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:03:30.927348       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.2.0/24] 
	I0729 13:03:40.927219       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:03:40.927269       1 main.go:299] handling current node
	I0729 13:03:40.927282       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:03:40.927288       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:03:40.927428       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:03:40.927448       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab] <==
	I0729 13:01:58.846394       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 13:01:58.846491       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:01:58.851628       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 13:01:58.851690       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 13:01:58.852586       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 13:01:58.852697       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 13:01:58.853568       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 13:01:58.854260       1 aggregator.go:165] initial CRD sync complete...
	I0729 13:01:58.854302       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 13:01:58.854309       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 13:01:58.854324       1 cache.go:39] Caches are synced for autoregister controller
	I0729 13:01:58.879721       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:01:58.883503       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 13:01:58.896215       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:01:58.896264       1 policy_source.go:224] refreshing policies
	E0729 13:01:58.898089       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 13:01:58.968553       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:01:59.760432       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 13:02:01.194950       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 13:02:01.316691       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 13:02:01.327542       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 13:02:01.421927       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 13:02:01.434466       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 13:02:11.620105       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 13:02:11.669883       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367] <==
	I0729 12:55:01.615880       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 12:55:14.520306       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 12:55:15.539086       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 12:56:33.520068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56658: use of closed network connection
	E0729 12:56:33.700447       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56666: use of closed network connection
	E0729 12:56:33.900929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56692: use of closed network connection
	E0729 12:56:34.079853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56712: use of closed network connection
	E0729 12:56:34.252068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56734: use of closed network connection
	E0729 12:56:34.418970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56752: use of closed network connection
	E0729 12:56:34.701327       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56770: use of closed network connection
	E0729 12:56:34.865727       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56790: use of closed network connection
	E0729 12:56:35.035535       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56796: use of closed network connection
	E0729 12:56:35.211675       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56812: use of closed network connection
	I0729 13:00:20.080570       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0729 13:00:20.088250       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.088453       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093336       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093407       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093451       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093480       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093516       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093559       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093760       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.097759       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0729 13:00:20.097370       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2] <==
	I0729 13:02:12.082982       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 13:02:12.120679       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 13:02:34.973721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.190386ms"
	I0729 13:02:34.973813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.358µs"
	I0729 13:02:34.989847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.695943ms"
	I0729 13:02:34.989932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.902µs"
	I0729 13:02:39.458665       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m02\" does not exist"
	I0729 13:02:39.474115       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m02" podCIDRs=["10.244.1.0/24"]
	I0729 13:02:41.360161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.856µs"
	I0729 13:02:41.399693       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.262µs"
	I0729 13:02:41.411916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.988µs"
	I0729 13:02:41.436580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.105µs"
	I0729 13:02:41.445237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.069µs"
	I0729 13:02:41.449426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.871µs"
	I0729 13:02:42.059687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.693µs"
	I0729 13:03:00.484535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:03:00.503535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.134µs"
	I0729 13:03:00.518901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.755µs"
	I0729 13:03:04.759658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.854851ms"
	I0729 13:03:04.760094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.636µs"
	I0729 13:03:18.660877       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:03:19.713755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:03:19.713882       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m03\" does not exist"
	I0729 13:03:19.731155       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m03" podCIDRs=["10.244.2.0/24"]
	I0729 13:03:40.002219       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	
	
	==> kube-controller-manager [ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24] <==
	I0729 12:56:04.056361       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m02\" does not exist"
	I0729 12:56:04.095238       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m02" podCIDRs=["10.244.1.0/24"]
	I0729 12:56:04.519345       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-786745-m02"
	I0729 12:56:25.412446       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:56:27.724938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.214135ms"
	I0729 12:56:27.738140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.065431ms"
	I0729 12:56:27.738559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.031µs"
	I0729 12:56:27.772358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.012µs"
	I0729 12:56:32.884300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.751328ms"
	I0729 12:56:32.884642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.615µs"
	I0729 12:56:32.991003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.579306ms"
	I0729 12:56:32.991183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.964µs"
	I0729 12:57:03.729752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:57:03.730183       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m03\" does not exist"
	I0729 12:57:03.765021       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m03" podCIDRs=["10.244.2.0/24"]
	I0729 12:57:04.540721       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-786745-m03"
	I0729 12:57:24.915754       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m03"
	I0729 12:57:53.230128       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:57:54.308362       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:57:54.309427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m03\" does not exist"
	I0729 12:57:54.326680       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m03" podCIDRs=["10.244.3.0/24"]
	I0729 12:58:14.619344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:58:59.594058       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m03"
	I0729 12:58:59.644580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.089636ms"
	I0729 12:58:59.644952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.165µs"
	
	
	==> kube-proxy [169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437] <==
	I0729 13:02:00.161092       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:02:00.176961       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0729 13:02:00.240795       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:02:00.240857       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:02:00.240875       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:02:00.248727       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:02:00.248977       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:02:00.249043       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:02:00.250978       1 config.go:192] "Starting service config controller"
	I0729 13:02:00.251009       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:02:00.251030       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:02:00.251034       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:02:00.251406       1 config.go:319] "Starting node config controller"
	I0729 13:02:00.251435       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:02:00.352177       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:02:00.352284       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:02:00.353099       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937] <==
	I0729 12:55:16.511434       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:55:16.538143       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0729 12:55:16.575712       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:55:16.575751       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:55:16.575767       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:55:16.579161       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:55:16.579851       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:55:16.580086       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:55:16.582244       1 config.go:192] "Starting service config controller"
	I0729 12:55:16.582708       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:55:16.582768       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:55:16.582787       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:55:16.583903       1 config.go:319] "Starting node config controller"
	I0729 12:55:16.592681       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:55:16.683699       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:55:16.683854       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:55:16.698102       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb] <==
	W0729 12:54:59.589893       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:54:59.589945       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:54:59.601158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 12:54:59.601206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 12:54:59.635081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:54:59.635131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 12:54:59.642276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:54:59.642327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:54:59.644444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:54:59.644466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:54:59.658327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 12:54:59.658368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:54:59.662789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:54:59.662833       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:54:59.675446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:54:59.675498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:54:59.755210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:54:59.755267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:54:59.800707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:54:59.800838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0729 12:55:01.748195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:00:20.071972       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 13:00:20.072084       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 13:00:20.072327       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 13:00:20.072768       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51] <==
	I0729 13:01:56.659753       1 serving.go:380] Generated self-signed cert in-memory
	W0729 13:01:58.824404       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 13:01:58.824507       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:01:58.824518       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 13:01:58.824547       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 13:01:58.867434       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 13:01:58.867476       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:01:58.871086       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 13:01:58.871240       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 13:01:58.871275       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:01:58.871305       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 13:01:58.972087       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:01:56 multinode-786745 kubelet[3096]: I0729 13:01:56.736738    3096 kubelet_node_status.go:73] "Attempting to register node" node="multinode-786745"
	Jul 29 13:01:58 multinode-786745 kubelet[3096]: E0729 13:01:58.997254    3096 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-786745\" already exists" pod="kube-system/kube-controller-manager-multinode-786745"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.004883    3096 kubelet_node_status.go:112] "Node was previously registered" node="multinode-786745"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.004957    3096 kubelet_node_status.go:76] "Successfully registered node" node="multinode-786745"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.005949    3096 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.006826    3096 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.210118    3096 apiserver.go:52] "Watching apiserver"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.215543    3096 topology_manager.go:215] "Topology Admit Handler" podUID="3c2dfe5b-569e-43bf-bce8-933daf37c819" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dbqpm"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.215827    3096 topology_manager.go:215] "Topology Admit Handler" podUID="faf35352-76e1-43b1-981a-c08cdaa912c6" podNamespace="kube-system" podName="kube-proxy-x8bkl"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.216008    3096 topology_manager.go:215] "Topology Admit Handler" podUID="d02d0326-e2e4-441d-84c7-f8c8f222e641" podNamespace="kube-system" podName="kindnet-wqdqp"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.216142    3096 topology_manager.go:215] "Topology Admit Handler" podUID="3c640e19-73fc-493c-812f-d519b75297e9" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.216303    3096 topology_manager.go:215] "Topology Admit Handler" podUID="6f011b54-3ee4-49b8-9c78-08bff7fb60d8" podNamespace="default" podName="busybox-fc5497c4f-cmdrr"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.228112    3096 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.246907    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faf35352-76e1-43b1-981a-c08cdaa912c6-xtables-lock\") pod \"kube-proxy-x8bkl\" (UID: \"faf35352-76e1-43b1-981a-c08cdaa912c6\") " pod="kube-system/kube-proxy-x8bkl"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247112    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3c640e19-73fc-493c-812f-d519b75297e9-tmp\") pod \"storage-provisioner\" (UID: \"3c640e19-73fc-493c-812f-d519b75297e9\") " pod="kube-system/storage-provisioner"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247212    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d02d0326-e2e4-441d-84c7-f8c8f222e641-cni-cfg\") pod \"kindnet-wqdqp\" (UID: \"d02d0326-e2e4-441d-84c7-f8c8f222e641\") " pod="kube-system/kindnet-wqdqp"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247228    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d02d0326-e2e4-441d-84c7-f8c8f222e641-lib-modules\") pod \"kindnet-wqdqp\" (UID: \"d02d0326-e2e4-441d-84c7-f8c8f222e641\") " pod="kube-system/kindnet-wqdqp"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247243    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d02d0326-e2e4-441d-84c7-f8c8f222e641-xtables-lock\") pod \"kindnet-wqdqp\" (UID: \"d02d0326-e2e4-441d-84c7-f8c8f222e641\") " pod="kube-system/kindnet-wqdqp"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247310    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faf35352-76e1-43b1-981a-c08cdaa912c6-lib-modules\") pod \"kube-proxy-x8bkl\" (UID: \"faf35352-76e1-43b1-981a-c08cdaa912c6\") " pod="kube-system/kube-proxy-x8bkl"
	Jul 29 13:02:01 multinode-786745 kubelet[3096]: I0729 13:02:01.422494    3096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 13:02:55 multinode-786745 kubelet[3096]: E0729 13:02:55.270392    3096 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:03:42.444011  272082 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
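The `bufio.Scanner: token too long` error in the stderr block above is the stock Go scanner hitting its default 64 KiB token limit (`bufio.MaxScanTokenSize`) on a very long line in `lastStart.txt`. As a minimal, illustrative sketch only (not minikube's actual implementation; the file path here is a stand-in), a scanner can be given a larger buffer so long lines are read instead of aborting:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Stand-in path for illustration; the report's real file is
	// .minikube/logs/lastStart.txt, which can contain very long single lines.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// By default bufio.Scanner rejects any token longer than
	// bufio.MaxScanTokenSize (64 KiB), which surfaces as
	// "bufio.Scanner: token too long". A larger max raises that ceiling.
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
```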
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-786745 -n multinode-786745
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-786745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.03s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 stop
E0729 13:04:27.881020  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786745 stop: exit status 82 (2m0.473233907s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-786745-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-786745 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786745 status: exit status 3 (18.786206998s)

                                                
                                                
-- stdout --
	multinode-786745
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-786745-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:06:05.949131  272740 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0729 13:06:05.949170  272740 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-786745 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-786745 -n multinode-786745
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-786745 logs -n 25: (1.462158227s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745:/home/docker/cp-test_multinode-786745-m02_multinode-786745.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745 sudo cat                                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m02_multinode-786745.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03:/home/docker/cp-test_multinode-786745-m02_multinode-786745-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745-m03 sudo cat                                   | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m02_multinode-786745-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp testdata/cp-test.txt                                                | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1996079696/001/cp-test_multinode-786745-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745:/home/docker/cp-test_multinode-786745-m03_multinode-786745.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745 sudo cat                                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m03_multinode-786745.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt                       | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02:/home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745-m02 sudo cat                                   | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-786745 node stop m03                                                          | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	| node    | multinode-786745 node start                                                             | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:58 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-786745                                                                | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:58 UTC |                     |
	| stop    | -p multinode-786745                                                                     | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 12:58 UTC |                     |
	| start   | -p multinode-786745                                                                     | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 13:00 UTC | 29 Jul 24 13:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-786745                                                                | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 13:03 UTC |                     |
	| node    | multinode-786745 node delete                                                            | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 13:03 UTC | 29 Jul 24 13:03 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-786745 stop                                                                   | multinode-786745 | jenkins | v1.33.1 | 29 Jul 24 13:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
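	
	The table above records the minikube commands the test harness issued against the multinode-786745 profile leading up to this restart. A rough by-hand reproduction of that stop/restart sequence might look like the following (a sketch only; it assumes the multinode-786745 profile already exists with the same kvm2/cri-o configuration, and simply replays the commands listed above):
	
	    # stop and restart the third node, then restart and trim the whole cluster
	    minikube -p multinode-786745 node stop m03
	    minikube -p multinode-786745 node start m03 -v=7 --alsologtostderr
	    minikube node list -p multinode-786745
	    minikube stop -p multinode-786745
	    minikube start -p multinode-786745 --wait=true -v=8 --alsologtostderr
	    minikube node list -p multinode-786745
	    minikube -p multinode-786745 node delete m03
	    minikube stop -p multinode-786745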
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:00:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:00:19.138434  270927 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:00:19.138682  270927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:00:19.138691  270927 out.go:304] Setting ErrFile to fd 2...
	I0729 13:00:19.138695  270927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:00:19.138911  270927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:00:19.139425  270927 out.go:298] Setting JSON to false
	I0729 13:00:19.140302  270927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9762,"bootTime":1722248257,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:00:19.140360  270927 start.go:139] virtualization: kvm guest
	I0729 13:00:19.142601  270927 out.go:177] * [multinode-786745] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:00:19.143900  270927 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:00:19.143904  270927 notify.go:220] Checking for updates...
	I0729 13:00:19.145308  270927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:00:19.146714  270927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:00:19.147841  270927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:00:19.149066  270927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:00:19.150251  270927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:00:19.151930  270927 config.go:182] Loaded profile config "multinode-786745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:00:19.152121  270927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:00:19.152558  270927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:00:19.152594  270927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:00:19.168188  270927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I0729 13:00:19.168571  270927 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:00:19.169177  270927 main.go:141] libmachine: Using API Version  1
	I0729 13:00:19.169199  270927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:00:19.169554  270927 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:00:19.169752  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:00:19.205592  270927 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:00:19.207073  270927 start.go:297] selected driver: kvm2
	I0729 13:00:19.207086  270927 start.go:901] validating driver "kvm2" against &{Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:00:19.207278  270927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:00:19.207681  270927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:00:19.207764  270927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:00:19.222517  270927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:00:19.223335  270927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:00:19.223375  270927 cni.go:84] Creating CNI manager for ""
	I0729 13:00:19.223384  270927 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 13:00:19.223496  270927 start.go:340] cluster config:
	{Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:00:19.223731  270927 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:00:19.225686  270927 out.go:177] * Starting "multinode-786745" primary control-plane node in "multinode-786745" cluster
	I0729 13:00:19.226889  270927 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:00:19.226926  270927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:00:19.226939  270927 cache.go:56] Caching tarball of preloaded images
	I0729 13:00:19.227019  270927 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:00:19.227029  270927 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:00:19.227147  270927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/config.json ...
	I0729 13:00:19.227335  270927 start.go:360] acquireMachinesLock for multinode-786745: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:00:19.227376  270927 start.go:364] duration metric: took 24.47µs to acquireMachinesLock for "multinode-786745"
	I0729 13:00:19.227390  270927 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:00:19.227397  270927 fix.go:54] fixHost starting: 
	I0729 13:00:19.227649  270927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:00:19.227685  270927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:00:19.241823  270927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0729 13:00:19.242269  270927 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:00:19.242944  270927 main.go:141] libmachine: Using API Version  1
	I0729 13:00:19.242981  270927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:00:19.243300  270927 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:00:19.243477  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:00:19.243710  270927 main.go:141] libmachine: (multinode-786745) Calling .GetState
	I0729 13:00:19.245192  270927 fix.go:112] recreateIfNeeded on multinode-786745: state=Running err=<nil>
	W0729 13:00:19.245214  270927 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:00:19.247368  270927 out.go:177] * Updating the running kvm2 "multinode-786745" VM ...
	I0729 13:00:19.248976  270927 machine.go:94] provisionDockerMachine start ...
	I0729 13:00:19.248994  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:00:19.249199  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.251473  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.251923  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.251952  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.252055  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.252235  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.252416  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.252572  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.252753  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.253049  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.253066  270927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:00:19.358302  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-786745
	
	I0729 13:00:19.358342  270927 main.go:141] libmachine: (multinode-786745) Calling .GetMachineName
	I0729 13:00:19.358602  270927 buildroot.go:166] provisioning hostname "multinode-786745"
	I0729 13:00:19.358636  270927 main.go:141] libmachine: (multinode-786745) Calling .GetMachineName
	I0729 13:00:19.358882  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.361972  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.362417  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.362449  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.362602  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.362792  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.362981  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.363146  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.363345  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.363516  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.363530  270927 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-786745 && echo "multinode-786745" | sudo tee /etc/hostname
	I0729 13:00:19.489905  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-786745
	
	I0729 13:00:19.489931  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.492518  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.492939  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.492970  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.493175  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.493357  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.493547  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.493676  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.493865  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.494121  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.494149  270927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-786745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-786745/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-786745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:00:19.597687  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:00:19.597729  270927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:00:19.597773  270927 buildroot.go:174] setting up certificates
	I0729 13:00:19.597783  270927 provision.go:84] configureAuth start
	I0729 13:00:19.597795  270927 main.go:141] libmachine: (multinode-786745) Calling .GetMachineName
	I0729 13:00:19.598081  270927 main.go:141] libmachine: (multinode-786745) Calling .GetIP
	I0729 13:00:19.600503  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.600847  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.600898  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.601037  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.603514  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.603963  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.604000  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.604165  270927 provision.go:143] copyHostCerts
	I0729 13:00:19.604197  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:00:19.604227  270927 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:00:19.604237  270927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:00:19.604320  270927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:00:19.604408  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:00:19.604426  270927 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:00:19.604433  270927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:00:19.604457  270927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:00:19.604537  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:00:19.604553  270927 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:00:19.604559  270927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:00:19.604587  270927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:00:19.604650  270927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.multinode-786745 san=[127.0.0.1 192.168.39.10 localhost minikube multinode-786745]
	I0729 13:00:19.778306  270927 provision.go:177] copyRemoteCerts
	I0729 13:00:19.778379  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:00:19.778404  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.780941  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.781307  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.781341  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.781516  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.781747  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.781917  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.782093  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:00:19.866254  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:00:19.866338  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:00:19.896402  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:00:19.896469  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 13:00:19.922708  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:00:19.922788  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:00:19.948346  270927 provision.go:87] duration metric: took 350.547839ms to configureAuth
	I0729 13:00:19.948372  270927 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:00:19.948578  270927 config.go:182] Loaded profile config "multinode-786745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:00:19.948650  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:00:19.951153  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.951524  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:00:19.951547  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:00:19.951728  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:00:19.951941  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.952085  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:00:19.952212  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:00:19.952358  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:00:19.952515  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:00:19.952531  270927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:01:50.771390  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:01:50.771431  270927 machine.go:97] duration metric: took 1m31.522440977s to provisionDockerMachine
	I0729 13:01:50.771444  270927 start.go:293] postStartSetup for "multinode-786745" (driver="kvm2")
	I0729 13:01:50.771455  270927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:01:50.771479  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:50.771856  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:01:50.771888  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:50.774794  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.775255  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:50.775284  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.775437  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:50.775654  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:50.775845  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:50.775976  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:01:50.860510  270927 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:01:50.864358  270927 command_runner.go:130] > NAME=Buildroot
	I0729 13:01:50.864381  270927 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 13:01:50.864388  270927 command_runner.go:130] > ID=buildroot
	I0729 13:01:50.864399  270927 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 13:01:50.864407  270927 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 13:01:50.864459  270927 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:01:50.864475  270927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:01:50.864553  270927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:01:50.864690  270927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:01:50.864706  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /etc/ssl/certs/2403402.pem
	I0729 13:01:50.864808  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:01:50.874090  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:01:50.896956  270927 start.go:296] duration metric: took 125.497949ms for postStartSetup
	I0729 13:01:50.896994  270927 fix.go:56] duration metric: took 1m31.669596653s for fixHost
	I0729 13:01:50.897018  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:50.899392  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.899756  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:50.899812  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:50.899932  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:50.900138  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:50.900298  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:50.900401  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:50.900581  270927 main.go:141] libmachine: Using SSH client type: native
	I0729 13:01:50.900745  270927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0729 13:01:50.900759  270927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:01:51.001463  270927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722258110.986148030
	
	I0729 13:01:51.001488  270927 fix.go:216] guest clock: 1722258110.986148030
	I0729 13:01:51.001495  270927 fix.go:229] Guest: 2024-07-29 13:01:50.98614803 +0000 UTC Remote: 2024-07-29 13:01:50.896998468 +0000 UTC m=+91.793639616 (delta=89.149562ms)
	I0729 13:01:51.001541  270927 fix.go:200] guest clock delta is within tolerance: 89.149562ms
	I0729 13:01:51.001548  270927 start.go:83] releasing machines lock for "multinode-786745", held for 1m31.774163124s
	I0729 13:01:51.001574  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.001888  270927 main.go:141] libmachine: (multinode-786745) Calling .GetIP
	I0729 13:01:51.004497  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.004945  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:51.004974  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.005139  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.005665  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.005880  270927 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 13:01:51.005957  270927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:01:51.006000  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:51.006116  270927 ssh_runner.go:195] Run: cat /version.json
	I0729 13:01:51.006135  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 13:01:51.008460  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.008630  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.008854  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:51.008881  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.009025  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:51.009140  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:51.009169  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:51.009188  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:51.009337  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 13:01:51.009338  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:51.009488  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 13:01:51.009495  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:01:51.009666  270927 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 13:01:51.009826  270927 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 13:01:51.107379  270927 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 13:01:51.107472  270927 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 13:01:51.107561  270927 ssh_runner.go:195] Run: systemctl --version
	I0729 13:01:51.113224  270927 command_runner.go:130] > systemd 252 (252)
	I0729 13:01:51.113251  270927 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 13:01:51.113506  270927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:01:51.279065  270927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 13:01:51.286942  270927 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 13:01:51.287162  270927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:01:51.287232  270927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:01:51.298502  270927 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 13:01:51.298528  270927 start.go:495] detecting cgroup driver to use...
	I0729 13:01:51.298595  270927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:01:51.318825  270927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:01:51.334647  270927 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:01:51.334711  270927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:01:51.349061  270927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:01:51.362617  270927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:01:51.510176  270927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:01:51.650361  270927 docker.go:233] disabling docker service ...
	I0729 13:01:51.650429  270927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:01:51.668133  270927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:01:51.682041  270927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:01:51.818090  270927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:01:51.954254  270927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:01:51.968596  270927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:01:51.986321  270927 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 13:01:51.986638  270927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:01:51.986689  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:51.998083  270927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:01:51.998146  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.008456  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.019330  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.029762  270927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:01:52.040847  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.051356  270927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.061891  270927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:01:52.072165  270927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:01:52.081208  270927 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 13:01:52.081271  270927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:01:52.090464  270927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:01:52.225725  270927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:01:52.463138  270927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:01:52.463213  270927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:01:52.467970  270927 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 13:01:52.467990  270927 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 13:01:52.467996  270927 command_runner.go:130] > Device: 0,22	Inode: 1347        Links: 1
	I0729 13:01:52.468009  270927 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 13:01:52.468014  270927 command_runner.go:130] > Access: 2024-07-29 13:01:52.344171509 +0000
	I0729 13:01:52.468033  270927 command_runner.go:130] > Modify: 2024-07-29 13:01:52.344171509 +0000
	I0729 13:01:52.468040  270927 command_runner.go:130] > Change: 2024-07-29 13:01:52.344171509 +0000
	I0729 13:01:52.468044  270927 command_runner.go:130] >  Birth: -
	I0729 13:01:52.468055  270927 start.go:563] Will wait 60s for crictl version
	I0729 13:01:52.468093  270927 ssh_runner.go:195] Run: which crictl
	I0729 13:01:52.471887  270927 command_runner.go:130] > /usr/bin/crictl
	I0729 13:01:52.471949  270927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:01:52.507672  270927 command_runner.go:130] > Version:  0.1.0
	I0729 13:01:52.507694  270927 command_runner.go:130] > RuntimeName:  cri-o
	I0729 13:01:52.507702  270927 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 13:01:52.507710  270927 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 13:01:52.507840  270927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:01:52.507913  270927 ssh_runner.go:195] Run: crio --version
	I0729 13:01:52.535049  270927 command_runner.go:130] > crio version 1.29.1
	I0729 13:01:52.535072  270927 command_runner.go:130] > Version:        1.29.1
	I0729 13:01:52.535080  270927 command_runner.go:130] > GitCommit:      unknown
	I0729 13:01:52.535086  270927 command_runner.go:130] > GitCommitDate:  unknown
	I0729 13:01:52.535091  270927 command_runner.go:130] > GitTreeState:   clean
	I0729 13:01:52.535099  270927 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 13:01:52.535105  270927 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 13:01:52.535111  270927 command_runner.go:130] > Compiler:       gc
	I0729 13:01:52.535120  270927 command_runner.go:130] > Platform:       linux/amd64
	I0729 13:01:52.535130  270927 command_runner.go:130] > Linkmode:       dynamic
	I0729 13:01:52.535137  270927 command_runner.go:130] > BuildTags:      
	I0729 13:01:52.535145  270927 command_runner.go:130] >   containers_image_ostree_stub
	I0729 13:01:52.535152  270927 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 13:01:52.535190  270927 command_runner.go:130] >   btrfs_noversion
	I0729 13:01:52.535206  270927 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 13:01:52.535214  270927 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 13:01:52.535220  270927 command_runner.go:130] >   seccomp
	I0729 13:01:52.535228  270927 command_runner.go:130] > LDFlags:          unknown
	I0729 13:01:52.535236  270927 command_runner.go:130] > SeccompEnabled:   true
	I0729 13:01:52.535246  270927 command_runner.go:130] > AppArmorEnabled:  false
	I0729 13:01:52.536356  270927 ssh_runner.go:195] Run: crio --version
	I0729 13:01:52.562387  270927 command_runner.go:130] > crio version 1.29.1
	I0729 13:01:52.562416  270927 command_runner.go:130] > Version:        1.29.1
	I0729 13:01:52.562425  270927 command_runner.go:130] > GitCommit:      unknown
	I0729 13:01:52.562431  270927 command_runner.go:130] > GitCommitDate:  unknown
	I0729 13:01:52.562438  270927 command_runner.go:130] > GitTreeState:   clean
	I0729 13:01:52.562447  270927 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 13:01:52.562454  270927 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 13:01:52.562461  270927 command_runner.go:130] > Compiler:       gc
	I0729 13:01:52.562469  270927 command_runner.go:130] > Platform:       linux/amd64
	I0729 13:01:52.562479  270927 command_runner.go:130] > Linkmode:       dynamic
	I0729 13:01:52.562485  270927 command_runner.go:130] > BuildTags:      
	I0729 13:01:52.562491  270927 command_runner.go:130] >   containers_image_ostree_stub
	I0729 13:01:52.562498  270927 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 13:01:52.562505  270927 command_runner.go:130] >   btrfs_noversion
	I0729 13:01:52.562514  270927 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 13:01:52.562521  270927 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 13:01:52.562529  270927 command_runner.go:130] >   seccomp
	I0729 13:01:52.562537  270927 command_runner.go:130] > LDFlags:          unknown
	I0729 13:01:52.562546  270927 command_runner.go:130] > SeccompEnabled:   true
	I0729 13:01:52.562553  270927 command_runner.go:130] > AppArmorEnabled:  false
	I0729 13:01:52.568668  270927 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:01:52.573055  270927 main.go:141] libmachine: (multinode-786745) Calling .GetIP
	I0729 13:01:52.575318  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:52.575621  270927 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 13:01:52.575665  270927 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 13:01:52.575850  270927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:01:52.580174  270927 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 13:01:52.580274  270927 kubeadm.go:883] updating cluster {Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:01:52.580423  270927 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:01:52.580473  270927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:01:52.629879  270927 command_runner.go:130] > {
	I0729 13:01:52.629902  270927 command_runner.go:130] >   "images": [
	I0729 13:01:52.629908  270927 command_runner.go:130] >     {
	I0729 13:01:52.629923  270927 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 13:01:52.629929  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.629986  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 13:01:52.630003  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630011  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630032  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 13:01:52.630048  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 13:01:52.630056  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630065  270927 command_runner.go:130] >       "size": "87165492",
	I0729 13:01:52.630073  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630080  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630094  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630104  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630113  270927 command_runner.go:130] >     },
	I0729 13:01:52.630119  270927 command_runner.go:130] >     {
	I0729 13:01:52.630132  270927 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 13:01:52.630142  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630154  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 13:01:52.630166  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630175  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630187  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 13:01:52.630202  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 13:01:52.630210  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630218  270927 command_runner.go:130] >       "size": "87174707",
	I0729 13:01:52.630226  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630237  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630247  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630254  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630262  270927 command_runner.go:130] >     },
	I0729 13:01:52.630269  270927 command_runner.go:130] >     {
	I0729 13:01:52.630280  270927 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 13:01:52.630289  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630299  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 13:01:52.630308  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630315  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630331  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 13:01:52.630347  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 13:01:52.630354  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630361  270927 command_runner.go:130] >       "size": "1363676",
	I0729 13:01:52.630369  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630378  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630387  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630397  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630404  270927 command_runner.go:130] >     },
	I0729 13:01:52.630411  270927 command_runner.go:130] >     {
	I0729 13:01:52.630424  270927 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 13:01:52.630432  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630442  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 13:01:52.630450  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630457  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630472  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 13:01:52.630492  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 13:01:52.630500  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630508  270927 command_runner.go:130] >       "size": "31470524",
	I0729 13:01:52.630517  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630527  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630534  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630541  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630550  270927 command_runner.go:130] >     },
	I0729 13:01:52.630558  270927 command_runner.go:130] >     {
	I0729 13:01:52.630571  270927 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 13:01:52.630580  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630589  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 13:01:52.630598  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630606  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630621  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 13:01:52.630636  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 13:01:52.630643  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630651  270927 command_runner.go:130] >       "size": "61245718",
	I0729 13:01:52.630660  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.630669  270927 command_runner.go:130] >       "username": "nonroot",
	I0729 13:01:52.630679  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630688  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630696  270927 command_runner.go:130] >     },
	I0729 13:01:52.630703  270927 command_runner.go:130] >     {
	I0729 13:01:52.630715  270927 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 13:01:52.630723  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630732  270927 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 13:01:52.630739  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630746  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630761  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 13:01:52.630775  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 13:01:52.630784  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630794  270927 command_runner.go:130] >       "size": "150779692",
	I0729 13:01:52.630804  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.630811  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.630817  270927 command_runner.go:130] >       },
	I0729 13:01:52.630824  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630831  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.630841  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.630848  270927 command_runner.go:130] >     },
	I0729 13:01:52.630856  270927 command_runner.go:130] >     {
	I0729 13:01:52.630867  270927 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 13:01:52.630880  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.630891  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 13:01:52.630899  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630906  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.630921  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 13:01:52.630936  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 13:01:52.630944  270927 command_runner.go:130] >       ],
	I0729 13:01:52.630953  270927 command_runner.go:130] >       "size": "117609954",
	I0729 13:01:52.630961  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.630968  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.630976  270927 command_runner.go:130] >       },
	I0729 13:01:52.630984  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.630992  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631006  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631013  270927 command_runner.go:130] >     },
	I0729 13:01:52.631029  270927 command_runner.go:130] >     {
	I0729 13:01:52.631042  270927 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 13:01:52.631051  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631062  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 13:01:52.631071  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631080  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631101  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 13:01:52.631115  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 13:01:52.631120  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631128  270927 command_runner.go:130] >       "size": "112198984",
	I0729 13:01:52.631138  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.631148  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.631155  270927 command_runner.go:130] >       },
	I0729 13:01:52.631163  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631169  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631177  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631182  270927 command_runner.go:130] >     },
	I0729 13:01:52.631187  270927 command_runner.go:130] >     {
	I0729 13:01:52.631195  270927 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 13:01:52.631201  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631208  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 13:01:52.631212  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631217  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631229  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 13:01:52.631240  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 13:01:52.631246  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631253  270927 command_runner.go:130] >       "size": "85953945",
	I0729 13:01:52.631259  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.631266  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631272  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631279  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631285  270927 command_runner.go:130] >     },
	I0729 13:01:52.631290  270927 command_runner.go:130] >     {
	I0729 13:01:52.631300  270927 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 13:01:52.631307  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631314  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 13:01:52.631322  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631330  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631343  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 13:01:52.631358  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 13:01:52.631366  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631375  270927 command_runner.go:130] >       "size": "63051080",
	I0729 13:01:52.631385  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.631394  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.631401  270927 command_runner.go:130] >       },
	I0729 13:01:52.631413  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631421  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631428  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.631434  270927 command_runner.go:130] >     },
	I0729 13:01:52.631442  270927 command_runner.go:130] >     {
	I0729 13:01:52.631452  270927 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 13:01:52.631461  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.631471  270927 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 13:01:52.631479  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631486  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.631501  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 13:01:52.631515  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 13:01:52.631524  270927 command_runner.go:130] >       ],
	I0729 13:01:52.631531  270927 command_runner.go:130] >       "size": "750414",
	I0729 13:01:52.631540  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.631548  270927 command_runner.go:130] >         "value": "65535"
	I0729 13:01:52.631556  270927 command_runner.go:130] >       },
	I0729 13:01:52.631564  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.631573  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.631582  270927 command_runner.go:130] >       "pinned": true
	I0729 13:01:52.631591  270927 command_runner.go:130] >     }
	I0729 13:01:52.631597  270927 command_runner.go:130] >   ]
	I0729 13:01:52.631603  270927 command_runner.go:130] > }
	I0729 13:01:52.631848  270927 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:01:52.631865  270927 crio.go:433] Images already preloaded, skipping extraction
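(For reference: the preload check logged above amounts to parsing the JSON that "sudo crictl images --output json" prints and comparing its repoTags against the images kubeadm needs. Below is a minimal Go sketch of that comparison, assuming the JSON above is piped in on stdin. It is an illustration only, not minikube's actual crio.go logic; the file name checkpreload.go and the hard-coded "required" list are assumptions taken from the image tags visible in this log.)

// checkpreload.go - read `crictl images --output json` from stdin and report
// whether a fixed list of required repo tags is present.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	// Collect every tag reported by the runtime.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Images this particular run expects (copied from the log above).
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	missing := 0
	for _, r := range required {
		if !have[r] {
			fmt.Println("missing:", r)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("all images are preloaded for cri-o runtime.")
	}
}

(Run on the node as: sudo crictl images --output json | go run checkpreload.go)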
	I0729 13:01:52.631929  270927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:01:52.669878  270927 command_runner.go:130] > {
	I0729 13:01:52.669905  270927 command_runner.go:130] >   "images": [
	I0729 13:01:52.669911  270927 command_runner.go:130] >     {
	I0729 13:01:52.669920  270927 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 13:01:52.669926  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.669932  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 13:01:52.669936  270927 command_runner.go:130] >       ],
	I0729 13:01:52.669940  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.669948  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 13:01:52.669956  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 13:01:52.669960  270927 command_runner.go:130] >       ],
	I0729 13:01:52.669966  270927 command_runner.go:130] >       "size": "87165492",
	I0729 13:01:52.669974  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.669981  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.669991  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670005  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670013  270927 command_runner.go:130] >     },
	I0729 13:01:52.670016  270927 command_runner.go:130] >     {
	I0729 13:01:52.670022  270927 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 13:01:52.670026  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670031  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 13:01:52.670035  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670040  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670048  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 13:01:52.670062  270927 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 13:01:52.670072  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670084  270927 command_runner.go:130] >       "size": "87174707",
	I0729 13:01:52.670093  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670104  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670113  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670120  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670124  270927 command_runner.go:130] >     },
	I0729 13:01:52.670130  270927 command_runner.go:130] >     {
	I0729 13:01:52.670137  270927 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 13:01:52.670147  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670158  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 13:01:52.670167  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670176  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670190  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 13:01:52.670204  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 13:01:52.670211  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670216  270927 command_runner.go:130] >       "size": "1363676",
	I0729 13:01:52.670225  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670235  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670246  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670255  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670263  270927 command_runner.go:130] >     },
	I0729 13:01:52.670271  270927 command_runner.go:130] >     {
	I0729 13:01:52.670284  270927 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 13:01:52.670293  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670300  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 13:01:52.670306  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670313  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670328  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 13:01:52.670348  270927 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 13:01:52.670356  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670363  270927 command_runner.go:130] >       "size": "31470524",
	I0729 13:01:52.670371  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670380  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670384  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670392  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670397  270927 command_runner.go:130] >     },
	I0729 13:01:52.670407  270927 command_runner.go:130] >     {
	I0729 13:01:52.670420  270927 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 13:01:52.670429  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670440  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 13:01:52.670449  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670457  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670472  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 13:01:52.670485  270927 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 13:01:52.670493  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670499  270927 command_runner.go:130] >       "size": "61245718",
	I0729 13:01:52.670507  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670512  270927 command_runner.go:130] >       "username": "nonroot",
	I0729 13:01:52.670521  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670527  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670534  270927 command_runner.go:130] >     },
	I0729 13:01:52.670539  270927 command_runner.go:130] >     {
	I0729 13:01:52.670551  270927 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 13:01:52.670559  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670566  270927 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 13:01:52.670574  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670580  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670592  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 13:01:52.670605  270927 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 13:01:52.670611  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670620  270927 command_runner.go:130] >       "size": "150779692",
	I0729 13:01:52.670626  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.670635  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.670644  270927 command_runner.go:130] >       },
	I0729 13:01:52.670651  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670660  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670666  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670674  270927 command_runner.go:130] >     },
	I0729 13:01:52.670680  270927 command_runner.go:130] >     {
	I0729 13:01:52.670693  270927 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 13:01:52.670702  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670711  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 13:01:52.670721  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670731  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670745  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 13:01:52.670757  270927 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 13:01:52.670765  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670770  270927 command_runner.go:130] >       "size": "117609954",
	I0729 13:01:52.670773  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.670778  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.670781  270927 command_runner.go:130] >       },
	I0729 13:01:52.670787  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670793  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670797  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670801  270927 command_runner.go:130] >     },
	I0729 13:01:52.670804  270927 command_runner.go:130] >     {
	I0729 13:01:52.670810  270927 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 13:01:52.670816  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670823  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 13:01:52.670828  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670832  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670848  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 13:01:52.670858  270927 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 13:01:52.670861  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670868  270927 command_runner.go:130] >       "size": "112198984",
	I0729 13:01:52.670872  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.670878  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.670882  270927 command_runner.go:130] >       },
	I0729 13:01:52.670887  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670892  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670896  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670902  270927 command_runner.go:130] >     },
	I0729 13:01:52.670906  270927 command_runner.go:130] >     {
	I0729 13:01:52.670912  270927 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 13:01:52.670918  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.670923  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 13:01:52.670929  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670933  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.670943  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 13:01:52.670951  270927 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 13:01:52.670957  270927 command_runner.go:130] >       ],
	I0729 13:01:52.670960  270927 command_runner.go:130] >       "size": "85953945",
	I0729 13:01:52.670964  270927 command_runner.go:130] >       "uid": null,
	I0729 13:01:52.670968  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.670972  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.670976  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.670981  270927 command_runner.go:130] >     },
	I0729 13:01:52.670984  270927 command_runner.go:130] >     {
	I0729 13:01:52.670993  270927 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 13:01:52.671000  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.671007  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 13:01:52.671011  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671015  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.671023  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 13:01:52.671032  270927 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 13:01:52.671036  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671041  270927 command_runner.go:130] >       "size": "63051080",
	I0729 13:01:52.671047  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.671051  270927 command_runner.go:130] >         "value": "0"
	I0729 13:01:52.671054  270927 command_runner.go:130] >       },
	I0729 13:01:52.671058  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.671064  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.671068  270927 command_runner.go:130] >       "pinned": false
	I0729 13:01:52.671074  270927 command_runner.go:130] >     },
	I0729 13:01:52.671077  270927 command_runner.go:130] >     {
	I0729 13:01:52.671084  270927 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 13:01:52.671089  270927 command_runner.go:130] >       "repoTags": [
	I0729 13:01:52.671094  270927 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 13:01:52.671098  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671102  270927 command_runner.go:130] >       "repoDigests": [
	I0729 13:01:52.671111  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 13:01:52.671118  270927 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 13:01:52.671124  270927 command_runner.go:130] >       ],
	I0729 13:01:52.671127  270927 command_runner.go:130] >       "size": "750414",
	I0729 13:01:52.671132  270927 command_runner.go:130] >       "uid": {
	I0729 13:01:52.671136  270927 command_runner.go:130] >         "value": "65535"
	I0729 13:01:52.671140  270927 command_runner.go:130] >       },
	I0729 13:01:52.671147  270927 command_runner.go:130] >       "username": "",
	I0729 13:01:52.671150  270927 command_runner.go:130] >       "spec": null,
	I0729 13:01:52.671154  270927 command_runner.go:130] >       "pinned": true
	I0729 13:01:52.671158  270927 command_runner.go:130] >     }
	I0729 13:01:52.671161  270927 command_runner.go:130] >   ]
	I0729 13:01:52.671164  270927 command_runner.go:130] > }
	I0729 13:01:52.671282  270927 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:01:52.671294  270927 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:01:52.671303  270927 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.30.3 crio true true} ...
	I0729 13:01:52.671420  270927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-786745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
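(The kubelet unit drop-in logged above is rendered from the node's config: Kubernetes version, hostname override, and node IP. Below is a minimal Go sketch of how such a drop-in could be templated with text/template. The program name renderkubelet.go and its hard-coded parameters are illustrative assumptions, not the kubeadm.go:946 implementation.)

// renderkubelet.go - render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Values taken from the node shown in this log.
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.3", "multinode-786745", "192.168.39.10"}
	// Write the rendered drop-in to stdout; in practice it would be copied to a
	// kubelet.service.d drop-in on the node, followed by a daemon-reload.
	if err := template.Must(template.New("kubelet").Parse(dropIn)).Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}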
	I0729 13:01:52.671522  270927 ssh_runner.go:195] Run: crio config
	I0729 13:01:52.712408  270927 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 13:01:52.712444  270927 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 13:01:52.712455  270927 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 13:01:52.712461  270927 command_runner.go:130] > #
	I0729 13:01:52.712472  270927 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 13:01:52.712481  270927 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 13:01:52.712490  270927 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 13:01:52.712503  270927 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 13:01:52.712512  270927 command_runner.go:130] > # reload'.
	I0729 13:01:52.712521  270927 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 13:01:52.712532  270927 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 13:01:52.712544  270927 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 13:01:52.712555  270927 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 13:01:52.712562  270927 command_runner.go:130] > [crio]
	I0729 13:01:52.712571  270927 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 13:01:52.712582  270927 command_runner.go:130] > # containers images, in this directory.
	I0729 13:01:52.712592  270927 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 13:01:52.712605  270927 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 13:01:52.712689  270927 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 13:01:52.712730  270927 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 13:01:52.712994  270927 command_runner.go:130] > # imagestore = ""
	I0729 13:01:52.713024  270927 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 13:01:52.713034  270927 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 13:01:52.713146  270927 command_runner.go:130] > storage_driver = "overlay"
	I0729 13:01:52.713162  270927 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 13:01:52.713171  270927 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 13:01:52.713181  270927 command_runner.go:130] > storage_option = [
	I0729 13:01:52.713332  270927 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 13:01:52.713341  270927 command_runner.go:130] > ]
	I0729 13:01:52.713351  270927 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 13:01:52.713360  270927 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 13:01:52.713661  270927 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 13:01:52.713680  270927 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 13:01:52.713689  270927 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 13:01:52.713696  270927 command_runner.go:130] > # always happen on a node reboot
	I0729 13:01:52.713881  270927 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 13:01:52.713902  270927 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 13:01:52.713922  270927 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 13:01:52.713934  270927 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 13:01:52.713984  270927 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 13:01:52.714011  270927 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 13:01:52.714024  270927 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 13:01:52.714209  270927 command_runner.go:130] > # internal_wipe = true
	I0729 13:01:52.714221  270927 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 13:01:52.714230  270927 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 13:01:52.714389  270927 command_runner.go:130] > # internal_repair = false
	I0729 13:01:52.714398  270927 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 13:01:52.714407  270927 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 13:01:52.714417  270927 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 13:01:52.714738  270927 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 13:01:52.714748  270927 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 13:01:52.714753  270927 command_runner.go:130] > [crio.api]
	I0729 13:01:52.714761  270927 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 13:01:52.714942  270927 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 13:01:52.714961  270927 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 13:01:52.715185  270927 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 13:01:52.715202  270927 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 13:01:52.715210  270927 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 13:01:52.715428  270927 command_runner.go:130] > # stream_port = "0"
	I0729 13:01:52.715444  270927 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 13:01:52.715709  270927 command_runner.go:130] > # stream_enable_tls = false
	I0729 13:01:52.715723  270927 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 13:01:52.715966  270927 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 13:01:52.715980  270927 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 13:01:52.715989  270927 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 13:01:52.715996  270927 command_runner.go:130] > # minutes.
	I0729 13:01:52.716106  270927 command_runner.go:130] > # stream_tls_cert = ""
	I0729 13:01:52.716123  270927 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 13:01:52.716133  270927 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 13:01:52.716313  270927 command_runner.go:130] > # stream_tls_key = ""
	I0729 13:01:52.716326  270927 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 13:01:52.716336  270927 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 13:01:52.716355  270927 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 13:01:52.716621  270927 command_runner.go:130] > # stream_tls_ca = ""
	I0729 13:01:52.716639  270927 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 13:01:52.716649  270927 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 13:01:52.716660  270927 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 13:01:52.716672  270927 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 13:01:52.716681  270927 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 13:01:52.716691  270927 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 13:01:52.716700  270927 command_runner.go:130] > [crio.runtime]
	I0729 13:01:52.716710  270927 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 13:01:52.716721  270927 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 13:01:52.716730  270927 command_runner.go:130] > # "nofile=1024:2048"
	I0729 13:01:52.716742  270927 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 13:01:52.716752  270927 command_runner.go:130] > # default_ulimits = [
	I0729 13:01:52.716759  270927 command_runner.go:130] > # ]
	I0729 13:01:52.716772  270927 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 13:01:52.716841  270927 command_runner.go:130] > # no_pivot = false
	I0729 13:01:52.716861  270927 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 13:01:52.716870  270927 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 13:01:52.716878  270927 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 13:01:52.716888  270927 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 13:01:52.716898  270927 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 13:01:52.716910  270927 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 13:01:52.716920  270927 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 13:01:52.716927  270927 command_runner.go:130] > # Cgroup setting for conmon
	I0729 13:01:52.716939  270927 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 13:01:52.716948  270927 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 13:01:52.716957  270927 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 13:01:52.716968  270927 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 13:01:52.716980  270927 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 13:01:52.716990  270927 command_runner.go:130] > conmon_env = [
	I0729 13:01:52.717001  270927 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 13:01:52.717009  270927 command_runner.go:130] > ]
	I0729 13:01:52.717018  270927 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 13:01:52.717027  270927 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 13:01:52.717039  270927 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 13:01:52.717046  270927 command_runner.go:130] > # default_env = [
	I0729 13:01:52.717054  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717064  270927 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 13:01:52.717079  270927 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 13:01:52.717145  270927 command_runner.go:130] > # selinux = false
	I0729 13:01:52.717166  270927 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 13:01:52.717188  270927 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 13:01:52.717199  270927 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 13:01:52.717207  270927 command_runner.go:130] > # seccomp_profile = ""
	I0729 13:01:52.717219  270927 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 13:01:52.717228  270927 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 13:01:52.717244  270927 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 13:01:52.717254  270927 command_runner.go:130] > # which might increase security.
	I0729 13:01:52.717264  270927 command_runner.go:130] > # This option is currently deprecated,
	I0729 13:01:52.717276  270927 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 13:01:52.717285  270927 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 13:01:52.717296  270927 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 13:01:52.717308  270927 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 13:01:52.717319  270927 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 13:01:52.717332  270927 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 13:01:52.717342  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.717351  270927 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 13:01:52.717363  270927 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 13:01:52.717374  270927 command_runner.go:130] > # the cgroup blockio controller.
	I0729 13:01:52.717381  270927 command_runner.go:130] > # blockio_config_file = ""
	I0729 13:01:52.717393  270927 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 13:01:52.717402  270927 command_runner.go:130] > # blockio parameters.
	I0729 13:01:52.717410  270927 command_runner.go:130] > # blockio_reload = false
	I0729 13:01:52.717423  270927 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 13:01:52.717430  270927 command_runner.go:130] > # irqbalance daemon.
	I0729 13:01:52.717440  270927 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 13:01:52.717452  270927 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 13:01:52.717466  270927 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 13:01:52.717482  270927 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 13:01:52.717494  270927 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 13:01:52.717507  270927 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 13:01:52.717518  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.717528  270927 command_runner.go:130] > # rdt_config_file = ""
	I0729 13:01:52.717539  270927 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 13:01:52.717551  270927 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 13:01:52.717575  270927 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 13:01:52.717587  270927 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 13:01:52.717598  270927 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 13:01:52.717612  270927 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 13:01:52.717621  270927 command_runner.go:130] > # will be added.
	I0729 13:01:52.717627  270927 command_runner.go:130] > # default_capabilities = [
	I0729 13:01:52.717636  270927 command_runner.go:130] > # 	"CHOWN",
	I0729 13:01:52.717644  270927 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 13:01:52.717653  270927 command_runner.go:130] > # 	"FSETID",
	I0729 13:01:52.717658  270927 command_runner.go:130] > # 	"FOWNER",
	I0729 13:01:52.717665  270927 command_runner.go:130] > # 	"SETGID",
	I0729 13:01:52.717673  270927 command_runner.go:130] > # 	"SETUID",
	I0729 13:01:52.717678  270927 command_runner.go:130] > # 	"SETPCAP",
	I0729 13:01:52.717695  270927 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 13:01:52.717705  270927 command_runner.go:130] > # 	"KILL",
	I0729 13:01:52.717710  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717729  270927 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 13:01:52.717743  270927 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 13:01:52.717758  270927 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 13:01:52.717773  270927 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 13:01:52.717785  270927 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 13:01:52.717794  270927 command_runner.go:130] > default_sysctls = [
	I0729 13:01:52.717802  270927 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 13:01:52.717810  270927 command_runner.go:130] > ]
	I0729 13:01:52.717818  270927 command_runner.go:130] > # List of devices on the host that a
	I0729 13:01:52.717829  270927 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 13:01:52.717839  270927 command_runner.go:130] > # allowed_devices = [
	I0729 13:01:52.717848  270927 command_runner.go:130] > # 	"/dev/fuse",
	I0729 13:01:52.717855  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717862  270927 command_runner.go:130] > # List of additional devices, specified as
	I0729 13:01:52.717875  270927 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 13:01:52.717887  270927 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 13:01:52.717899  270927 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 13:01:52.717909  270927 command_runner.go:130] > # additional_devices = [
	I0729 13:01:52.717914  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717925  270927 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 13:01:52.717932  270927 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 13:01:52.717941  270927 command_runner.go:130] > # 	"/etc/cdi",
	I0729 13:01:52.717948  270927 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 13:01:52.717956  270927 command_runner.go:130] > # ]
	I0729 13:01:52.717965  270927 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 13:01:52.717978  270927 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 13:01:52.717987  270927 command_runner.go:130] > # Defaults to false.
	I0729 13:01:52.718006  270927 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 13:01:52.718018  270927 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 13:01:52.718028  270927 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 13:01:52.718038  270927 command_runner.go:130] > # hooks_dir = [
	I0729 13:01:52.718045  270927 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 13:01:52.718052  270927 command_runner.go:130] > # ]
	I0729 13:01:52.718062  270927 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 13:01:52.718077  270927 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 13:01:52.718087  270927 command_runner.go:130] > # its default mounts from the following two files:
	I0729 13:01:52.718095  270927 command_runner.go:130] > #
	I0729 13:01:52.718104  270927 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 13:01:52.718116  270927 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 13:01:52.718125  270927 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 13:01:52.718133  270927 command_runner.go:130] > #
	I0729 13:01:52.718142  270927 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 13:01:52.718156  270927 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 13:01:52.718169  270927 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 13:01:52.718180  270927 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 13:01:52.718187  270927 command_runner.go:130] > #
	I0729 13:01:52.718197  270927 command_runner.go:130] > # default_mounts_file = ""
	I0729 13:01:52.718209  270927 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 13:01:52.718221  270927 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 13:01:52.718229  270927 command_runner.go:130] > pids_limit = 1024
	I0729 13:01:52.718242  270927 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 13:01:52.718253  270927 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 13:01:52.718266  270927 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 13:01:52.718281  270927 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 13:01:52.718290  270927 command_runner.go:130] > # log_size_max = -1
	I0729 13:01:52.718304  270927 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 13:01:52.718313  270927 command_runner.go:130] > # log_to_journald = false
	I0729 13:01:52.718329  270927 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 13:01:52.718340  270927 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 13:01:52.718353  270927 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 13:01:52.718365  270927 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 13:01:52.718376  270927 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 13:01:52.718385  270927 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 13:01:52.718399  270927 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 13:01:52.718408  270927 command_runner.go:130] > # read_only = false
	I0729 13:01:52.718417  270927 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 13:01:52.718432  270927 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 13:01:52.718439  270927 command_runner.go:130] > # live configuration reload.
	I0729 13:01:52.718445  270927 command_runner.go:130] > # log_level = "info"
	I0729 13:01:52.718453  270927 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 13:01:52.718462  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.718467  270927 command_runner.go:130] > # log_filter = ""
	I0729 13:01:52.718480  270927 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 13:01:52.718489  270927 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 13:01:52.718498  270927 command_runner.go:130] > # separated by comma.
	I0729 13:01:52.718508  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718517  270927 command_runner.go:130] > # uid_mappings = ""
	I0729 13:01:52.718524  270927 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 13:01:52.718535  270927 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 13:01:52.718541  270927 command_runner.go:130] > # separated by comma.
	I0729 13:01:52.718551  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718559  270927 command_runner.go:130] > # gid_mappings = ""
	I0729 13:01:52.718568  270927 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 13:01:52.718579  270927 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 13:01:52.718589  270927 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 13:01:52.718602  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718611  270927 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 13:01:52.718620  270927 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 13:01:52.718631  270927 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 13:01:52.718642  270927 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 13:01:52.718656  270927 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 13:01:52.718665  270927 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 13:01:52.718675  270927 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 13:01:52.718687  270927 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 13:01:52.718701  270927 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 13:01:52.718711  270927 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 13:01:52.718719  270927 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 13:01:52.718735  270927 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 13:01:52.718745  270927 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 13:01:52.718755  270927 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 13:01:52.718768  270927 command_runner.go:130] > drop_infra_ctr = false
	I0729 13:01:52.718779  270927 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 13:01:52.718795  270927 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 13:01:52.718808  270927 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 13:01:52.718818  270927 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 13:01:52.718829  270927 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 13:01:52.718841  270927 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 13:01:52.718849  270927 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 13:01:52.718860  270927 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 13:01:52.718866  270927 command_runner.go:130] > # shared_cpuset = ""
	I0729 13:01:52.718879  270927 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 13:01:52.718889  270927 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 13:01:52.718899  270927 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 13:01:52.718910  270927 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 13:01:52.718919  270927 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 13:01:52.718933  270927 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 13:01:52.718965  270927 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 13:01:52.718973  270927 command_runner.go:130] > # enable_criu_support = false
	I0729 13:01:52.718980  270927 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 13:01:52.718990  270927 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 13:01:52.719008  270927 command_runner.go:130] > # enable_pod_events = false
	I0729 13:01:52.719020  270927 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 13:01:52.719041  270927 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 13:01:52.719049  270927 command_runner.go:130] > # default_runtime = "runc"
	I0729 13:01:52.719058  270927 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 13:01:52.719073  270927 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 13:01:52.719089  270927 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 13:01:52.719100  270927 command_runner.go:130] > # creation as a file is not desired either.
	I0729 13:01:52.719113  270927 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 13:01:52.719124  270927 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 13:01:52.719132  270927 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 13:01:52.719139  270927 command_runner.go:130] > # ]
	I0729 13:01:52.719147  270927 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 13:01:52.719158  270927 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 13:01:52.719172  270927 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 13:01:52.719183  270927 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 13:01:52.719190  270927 command_runner.go:130] > #
	I0729 13:01:52.719198  270927 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 13:01:52.719208  270927 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 13:01:52.719234  270927 command_runner.go:130] > # runtime_type = "oci"
	I0729 13:01:52.719244  270927 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 13:01:52.719251  270927 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 13:01:52.719260  270927 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 13:01:52.719270  270927 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 13:01:52.719278  270927 command_runner.go:130] > # monitor_env = []
	I0729 13:01:52.719289  270927 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 13:01:52.719299  270927 command_runner.go:130] > # allowed_annotations = []
	I0729 13:01:52.719309  270927 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 13:01:52.719317  270927 command_runner.go:130] > # Where:
	I0729 13:01:52.719325  270927 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 13:01:52.719338  270927 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 13:01:52.719347  270927 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 13:01:52.719359  270927 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 13:01:52.719368  270927 command_runner.go:130] > #   in $PATH.
	I0729 13:01:52.719377  270927 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 13:01:52.719387  270927 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 13:01:52.719400  270927 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 13:01:52.719407  270927 command_runner.go:130] > #   state.
	I0729 13:01:52.719419  270927 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 13:01:52.719430  270927 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 13:01:52.719440  270927 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 13:01:52.719450  270927 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 13:01:52.719463  270927 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 13:01:52.719475  270927 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 13:01:52.719485  270927 command_runner.go:130] > #   The currently recognized values are:
	I0729 13:01:52.719498  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 13:01:52.719511  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 13:01:52.719523  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 13:01:52.719535  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 13:01:52.719548  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 13:01:52.719563  270927 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 13:01:52.719575  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 13:01:52.719587  270927 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 13:01:52.719600  270927 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 13:01:52.719612  270927 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 13:01:52.719620  270927 command_runner.go:130] > #   deprecated option "conmon".
	I0729 13:01:52.719629  270927 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 13:01:52.719636  270927 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 13:01:52.719646  270927 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 13:01:52.719653  270927 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 13:01:52.719659  270927 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 13:01:52.719667  270927 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 13:01:52.719673  270927 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 13:01:52.719680  270927 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 13:01:52.719683  270927 command_runner.go:130] > #
	I0729 13:01:52.719688  270927 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 13:01:52.719693  270927 command_runner.go:130] > #
	I0729 13:01:52.719699  270927 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 13:01:52.719709  270927 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 13:01:52.719714  270927 command_runner.go:130] > #
	I0729 13:01:52.719720  270927 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 13:01:52.719727  270927 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 13:01:52.719733  270927 command_runner.go:130] > #
	I0729 13:01:52.719739  270927 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 13:01:52.719744  270927 command_runner.go:130] > # feature.
	I0729 13:01:52.719748  270927 command_runner.go:130] > #
	I0729 13:01:52.719757  270927 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 13:01:52.719765  270927 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 13:01:52.719771  270927 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 13:01:52.719779  270927 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 13:01:52.719785  270927 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 13:01:52.719791  270927 command_runner.go:130] > #
	I0729 13:01:52.719797  270927 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 13:01:52.719805  270927 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 13:01:52.719811  270927 command_runner.go:130] > #
	I0729 13:01:52.719817  270927 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 13:01:52.719826  270927 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 13:01:52.719832  270927 command_runner.go:130] > #
	I0729 13:01:52.719838  270927 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 13:01:52.719845  270927 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 13:01:52.719851  270927 command_runner.go:130] > # limitation.
	I0729 13:01:52.719855  270927 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 13:01:52.719861  270927 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 13:01:52.719865  270927 command_runner.go:130] > runtime_type = "oci"
	I0729 13:01:52.719869  270927 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 13:01:52.719876  270927 command_runner.go:130] > runtime_config_path = ""
	I0729 13:01:52.719882  270927 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 13:01:52.719886  270927 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 13:01:52.719892  270927 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 13:01:52.719896  270927 command_runner.go:130] > monitor_env = [
	I0729 13:01:52.719904  270927 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 13:01:52.719909  270927 command_runner.go:130] > ]
	I0729 13:01:52.719913  270927 command_runner.go:130] > privileged_without_host_devices = false
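
	The [crio.runtime.runtimes.runc] entry above is the concrete instance of the runtimes-table format documented earlier. As a sketch only, an additional handler could be declared as below; the handler name, binary path, and root directory are illustrative assumptions and are not part of this cluster's configuration. The allowed_annotations list shows how the seccomp notifier described above would be permitted on the runtime side (the pod would additionally need the io.kubernetes.cri-o.seccompNotifierAction annotation and restartPolicy set to Never):

		[crio.runtime.runtimes.crun]
		# hypothetical handler; path and root are assumed for illustration
		runtime_path = "/usr/bin/crun"
		runtime_type = "oci"
		runtime_root = "/run/crun"
		monitor_path = "/usr/libexec/crio/conmon"
		allowed_annotations = [
			"io.kubernetes.cri-o.seccompNotifierAction",
		]
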
	I0729 13:01:52.719921  270927 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 13:01:52.719929  270927 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 13:01:52.719935  270927 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 13:01:52.719944  270927 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 13:01:52.719954  270927 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 13:01:52.719959  270927 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 13:01:52.719969  270927 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 13:01:52.719979  270927 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 13:01:52.719986  270927 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 13:01:52.719996  270927 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 13:01:52.720006  270927 command_runner.go:130] > # Example:
	I0729 13:01:52.720010  270927 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 13:01:52.720015  270927 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 13:01:52.720019  270927 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 13:01:52.720023  270927 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 13:01:52.720026  270927 command_runner.go:130] > # cpuset = 0
	I0729 13:01:52.720030  270927 command_runner.go:130] > # cpushares = "0-1"
	I0729 13:01:52.720033  270927 command_runner.go:130] > # Where:
	I0729 13:01:52.720038  270927 command_runner.go:130] > # The workload name is workload-type.
	I0729 13:01:52.720045  270927 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 13:01:52.720050  270927 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 13:01:52.720055  270927 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 13:01:52.720062  270927 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 13:01:52.720067  270927 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 13:01:52.720072  270927 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 13:01:52.720078  270927 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 13:01:52.720082  270927 command_runner.go:130] > # Default value is set to true
	I0729 13:01:52.720086  270927 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 13:01:52.720091  270927 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 13:01:52.720095  270927 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 13:01:52.720099  270927 command_runner.go:130] > # Default value is set to 'false'
	I0729 13:01:52.720103  270927 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 13:01:52.720109  270927 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 13:01:52.720112  270927 command_runner.go:130] > #
	I0729 13:01:52.720117  270927 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 13:01:52.720124  270927 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 13:01:52.720130  270927 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 13:01:52.720136  270927 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 13:01:52.720141  270927 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 13:01:52.720144  270927 command_runner.go:130] > [crio.image]
	I0729 13:01:52.720149  270927 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 13:01:52.720155  270927 command_runner.go:130] > # default_transport = "docker://"
	I0729 13:01:52.720160  270927 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 13:01:52.720166  270927 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 13:01:52.720169  270927 command_runner.go:130] > # global_auth_file = ""
	I0729 13:01:52.720174  270927 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 13:01:52.720178  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.720182  270927 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 13:01:52.720188  270927 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 13:01:52.720197  270927 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 13:01:52.720201  270927 command_runner.go:130] > # This option supports live configuration reload.
	I0729 13:01:52.720205  270927 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 13:01:52.720211  270927 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 13:01:52.720219  270927 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 13:01:52.720225  270927 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 13:01:52.720234  270927 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 13:01:52.720240  270927 command_runner.go:130] > # pause_command = "/pause"
	I0729 13:01:52.720246  270927 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 13:01:52.720253  270927 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 13:01:52.720261  270927 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 13:01:52.720267  270927 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 13:01:52.720274  270927 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 13:01:52.720280  270927 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 13:01:52.720286  270927 command_runner.go:130] > # pinned_images = [
	I0729 13:01:52.720290  270927 command_runner.go:130] > # ]
	I0729 13:01:52.720297  270927 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 13:01:52.720305  270927 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 13:01:52.720313  270927 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 13:01:52.720321  270927 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 13:01:52.720327  270927 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 13:01:52.720333  270927 command_runner.go:130] > # signature_policy = ""
	I0729 13:01:52.720339  270927 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 13:01:52.720347  270927 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 13:01:52.720355  270927 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 13:01:52.720361  270927 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 13:01:52.720369  270927 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 13:01:52.720373  270927 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
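
	As a worked example of the namespaced-policy lookup described above (values are illustrative, not taken from this run): with the directory setting below, an image pulled for a pod in the kube-system namespace would be checked against /etc/crio/policies/kube-system.json, falling back to signature_policy or the system-wide /etc/containers/policy.json if that file does not exist.

		signature_policy_dir = "/etc/crio/policies"
		# pull in namespace "kube-system"  ->  /etc/crio/policies/kube-system.json
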
	I0729 13:01:52.720381  270927 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 13:01:52.720389  270927 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 13:01:52.720393  270927 command_runner.go:130] > # changing them here.
	I0729 13:01:52.720399  270927 command_runner.go:130] > # insecure_registries = [
	I0729 13:01:52.720402  270927 command_runner.go:130] > # ]
	I0729 13:01:52.720408  270927 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 13:01:52.720415  270927 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 13:01:52.720419  270927 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 13:01:52.720426  270927 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 13:01:52.720430  270927 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 13:01:52.720438  270927 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 13:01:52.720446  270927 command_runner.go:130] > # CNI plugins.
	I0729 13:01:52.720453  270927 command_runner.go:130] > [crio.network]
	I0729 13:01:52.720465  270927 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 13:01:52.720477  270927 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 13:01:52.720486  270927 command_runner.go:130] > # cni_default_network = ""
	I0729 13:01:52.720496  270927 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 13:01:52.720506  270927 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 13:01:52.720518  270927 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 13:01:52.720527  270927 command_runner.go:130] > # plugin_dirs = [
	I0729 13:01:52.720533  270927 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 13:01:52.720541  270927 command_runner.go:130] > # ]
	I0729 13:01:52.720549  270927 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 13:01:52.720558  270927 command_runner.go:130] > [crio.metrics]
	I0729 13:01:52.720565  270927 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 13:01:52.720574  270927 command_runner.go:130] > enable_metrics = true
	I0729 13:01:52.720582  270927 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 13:01:52.720591  270927 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 13:01:52.720602  270927 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 13:01:52.720615  270927 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 13:01:52.720626  270927 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 13:01:52.720635  270927 command_runner.go:130] > # metrics_collectors = [
	I0729 13:01:52.720644  270927 command_runner.go:130] > # 	"operations",
	I0729 13:01:52.720652  270927 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 13:01:52.720662  270927 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 13:01:52.720668  270927 command_runner.go:130] > # 	"operations_errors",
	I0729 13:01:52.720677  270927 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 13:01:52.720682  270927 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 13:01:52.720689  270927 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 13:01:52.720693  270927 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 13:01:52.720700  270927 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 13:01:52.720704  270927 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 13:01:52.720710  270927 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 13:01:52.720715  270927 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 13:01:52.720721  270927 command_runner.go:130] > # 	"containers_oom_total",
	I0729 13:01:52.720727  270927 command_runner.go:130] > # 	"containers_oom",
	I0729 13:01:52.720733  270927 command_runner.go:130] > # 	"processes_defunct",
	I0729 13:01:52.720737  270927 command_runner.go:130] > # 	"operations_total",
	I0729 13:01:52.720745  270927 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 13:01:52.720753  270927 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 13:01:52.720761  270927 command_runner.go:130] > # 	"operations_errors_total",
	I0729 13:01:52.720766  270927 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 13:01:52.720773  270927 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 13:01:52.720779  270927 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 13:01:52.720789  270927 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 13:01:52.720808  270927 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 13:01:52.720818  270927 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 13:01:52.720825  270927 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 13:01:52.720834  270927 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 13:01:52.720841  270927 command_runner.go:130] > # ]
	I0729 13:01:52.720851  270927 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 13:01:52.720861  270927 command_runner.go:130] > # metrics_port = 9090
	I0729 13:01:52.720868  270927 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 13:01:52.720877  270927 command_runner.go:130] > # metrics_socket = ""
	I0729 13:01:52.720885  270927 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 13:01:52.720897  270927 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 13:01:52.720908  270927 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 13:01:52.720915  270927 command_runner.go:130] > # certificate on any modification event.
	I0729 13:01:52.720919  270927 command_runner.go:130] > # metrics_cert = ""
	I0729 13:01:52.720926  270927 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 13:01:52.720931  270927 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 13:01:52.720937  270927 command_runner.go:130] > # metrics_key = ""
	I0729 13:01:52.720943  270927 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 13:01:52.720949  270927 command_runner.go:130] > [crio.tracing]
	I0729 13:01:52.720954  270927 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 13:01:52.720961  270927 command_runner.go:130] > # enable_tracing = false
	I0729 13:01:52.720966  270927 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 13:01:52.720972  270927 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 13:01:52.720978  270927 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 13:01:52.720985  270927 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 13:01:52.720989  270927 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 13:01:52.720994  270927 command_runner.go:130] > [crio.nri]
	I0729 13:01:52.721002  270927 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 13:01:52.721008  270927 command_runner.go:130] > # enable_nri = false
	I0729 13:01:52.721012  270927 command_runner.go:130] > # NRI socket to listen on.
	I0729 13:01:52.721017  270927 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 13:01:52.721022  270927 command_runner.go:130] > # NRI plugin directory to use.
	I0729 13:01:52.721029  270927 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 13:01:52.721034  270927 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 13:01:52.721040  270927 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 13:01:52.721045  270927 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 13:01:52.721050  270927 command_runner.go:130] > # nri_disable_connections = false
	I0729 13:01:52.721057  270927 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 13:01:52.721062  270927 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 13:01:52.721069  270927 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 13:01:52.721073  270927 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 13:01:52.721081  270927 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 13:01:52.721085  270927 command_runner.go:130] > [crio.stats]
	I0729 13:01:52.721092  270927 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 13:01:52.721098  270927 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 13:01:52.721104  270927 command_runner.go:130] > # stats_collection_period = 0
	I0729 13:01:52.721124  270927 command_runner.go:130] ! time="2024-07-29 13:01:52.689369031Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 13:01:52.721137  270927 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 13:01:52.721254  270927 cni.go:84] Creating CNI manager for ""
	I0729 13:01:52.721264  270927 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 13:01:52.721274  270927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:01:52.721295  270927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-786745 NodeName:multinode-786745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:01:52.721443  270927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-786745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:01:52.721516  270927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:01:52.731983  270927 command_runner.go:130] > kubeadm
	I0729 13:01:52.732010  270927 command_runner.go:130] > kubectl
	I0729 13:01:52.732017  270927 command_runner.go:130] > kubelet
	I0729 13:01:52.732049  270927 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:01:52.732097  270927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:01:52.742110  270927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0729 13:01:52.758435  270927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:01:52.774418  270927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 13:01:52.791581  270927 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0729 13:01:52.795191  270927 command_runner.go:130] > 192.168.39.10	control-plane.minikube.internal
	I0729 13:01:52.795325  270927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:01:52.929313  270927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:01:52.944697  270927 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745 for IP: 192.168.39.10
	I0729 13:01:52.944723  270927 certs.go:194] generating shared ca certs ...
	I0729 13:01:52.944745  270927 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:01:52.944941  270927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:01:52.945007  270927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:01:52.945023  270927 certs.go:256] generating profile certs ...
	I0729 13:01:52.945113  270927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/client.key
	I0729 13:01:52.945204  270927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.key.fa4f91be
	I0729 13:01:52.945261  270927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.key
	I0729 13:01:52.945279  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:01:52.945301  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:01:52.945320  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:01:52.945337  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:01:52.945355  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:01:52.945375  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:01:52.945392  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:01:52.945410  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:01:52.945476  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:01:52.945514  270927 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:01:52.945529  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:01:52.945561  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:01:52.945592  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:01:52.945716  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:01:52.945832  270927 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:01:52.945879  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> /usr/share/ca-certificates/2403402.pem
	I0729 13:01:52.945901  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:52.945920  270927 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem -> /usr/share/ca-certificates/240340.pem
	I0729 13:01:52.946585  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:01:52.970344  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:01:52.994188  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:01:53.016466  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:01:53.039443  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:01:53.062299  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:01:53.085162  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:01:53.107944  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/multinode-786745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:01:53.130742  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:01:53.152815  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:01:53.175499  270927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:01:53.198023  270927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:01:53.214307  270927 ssh_runner.go:195] Run: openssl version
	I0729 13:01:53.219968  270927 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 13:01:53.220237  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:01:53.232158  270927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.236511  270927 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.236768  270927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.236837  270927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:01:53.242895  270927 command_runner.go:130] > 3ec20f2e
	I0729 13:01:53.243075  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:01:53.253132  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:01:53.264809  270927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.269110  270927 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.269279  270927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.269317  270927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:01:53.275002  270927 command_runner.go:130] > b5213941
	I0729 13:01:53.275054  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:01:53.285881  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:01:53.298389  270927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.303093  270927 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.303237  270927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.303281  270927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:01:53.309019  270927 command_runner.go:130] > 51391683
	I0729 13:01:53.309204  270927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:01:53.321115  270927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:01:53.325819  270927 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:01:53.325836  270927 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 13:01:53.325843  270927 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0729 13:01:53.325852  270927 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 13:01:53.325868  270927 command_runner.go:130] > Access: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325875  270927 command_runner.go:130] > Modify: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325884  270927 command_runner.go:130] > Change: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325892  270927 command_runner.go:130] >  Birth: 2024-07-29 12:54:52.254613196 +0000
	I0729 13:01:53.325958  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:01:53.331939  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.332069  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:01:53.337425  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.337660  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:01:53.343130  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.343292  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:01:53.348663  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.348836  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:01:53.354025  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.354219  270927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:01:53.359894  270927 command_runner.go:130] > Certificate will not expire
	I0729 13:01:53.360124  270927 kubeadm.go:392] StartCluster: {Name:multinode-786745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-786745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:01:53.360231  270927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:01:53.360290  270927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:01:53.406322  270927 command_runner.go:130] > 30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4
	I0729 13:01:53.406426  270927 command_runner.go:130] > 0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd
	I0729 13:01:53.406447  270927 command_runner.go:130] > 45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac
	I0729 13:01:53.406461  270927 command_runner.go:130] > 5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937
	I0729 13:01:53.406476  270927 command_runner.go:130] > ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24
	I0729 13:01:53.406487  270927 command_runner.go:130] > ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367
	I0729 13:01:53.406516  270927 command_runner.go:130] > a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae
	I0729 13:01:53.406535  270927 command_runner.go:130] > 294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb
	I0729 13:01:53.408007  270927 cri.go:89] found id: "30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4"
	I0729 13:01:53.408024  270927 cri.go:89] found id: "0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd"
	I0729 13:01:53.408028  270927 cri.go:89] found id: "45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac"
	I0729 13:01:53.408031  270927 cri.go:89] found id: "5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937"
	I0729 13:01:53.408034  270927 cri.go:89] found id: "ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24"
	I0729 13:01:53.408037  270927 cri.go:89] found id: "ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367"
	I0729 13:01:53.408039  270927 cri.go:89] found id: "a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae"
	I0729 13:01:53.408042  270927 cri.go:89] found id: "294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb"
	I0729 13:01:53.408045  270927 cri.go:89] found id: ""
	I0729 13:01:53.408091  270927 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.569829524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258366569802012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=943d02da-1493-4c46-8325-2bebc4ebf1fe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.570458329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5f60292-79c9-4904-b1d1-37a14d3fdec7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.570525490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5f60292-79c9-4904-b1d1-37a14d3fdec7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.570958278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5f60292-79c9-4904-b1d1-37a14d3fdec7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.613078778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad25c367-4768-4e22-ad52-8ac59c75877f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.613171403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad25c367-4768-4e22-ad52-8ac59c75877f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.614554819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5080ab1-9a57-4f1c-aa2e-57d41470cbe9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.615120984Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258366615097142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5080ab1-9a57-4f1c-aa2e-57d41470cbe9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.615844859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36b90a5c-94c3-4a5b-9ab5-c898c328ac9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.615919294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36b90a5c-94c3-4a5b-9ab5-c898c328ac9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.616261900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36b90a5c-94c3-4a5b-9ab5-c898c328ac9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.656253942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c69b52da-2b6e-4e9f-a85d-b5fc83791f2f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.656355846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c69b52da-2b6e-4e9f-a85d-b5fc83791f2f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.657403020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8f5a404-bbfc-4002-9f90-17db035ca3d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.657920189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258366657890863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8f5a404-bbfc-4002-9f90-17db035ca3d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.658493201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be237253-6f4b-4a85-b72f-b2372d7cc69c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.658573486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be237253-6f4b-4a85-b72f-b2372d7cc69c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.658985902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be237253-6f4b-4a85-b72f-b2372d7cc69c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.701455930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ac5dd1d-691c-475a-a371-fd36e030808f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.701551062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ac5dd1d-691c-475a-a371-fd36e030808f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.703009397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1911e5f-9f46-432b-87d5-34ebe126fc33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.703421272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258366703398409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1911e5f-9f46-432b-87d5-34ebe126fc33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.704122532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00c2445c-f5ca-4c30-8ac0-cc43cdad0229 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.704178058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00c2445c-f5ca-4c30-8ac0-cc43cdad0229 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:06:06 multinode-786745 crio[2878]: time="2024-07-29 13:06:06.704491948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e28cd1159490fc75c01648b6eae9216c75633a5c701d3d77024939b7c8240b1,PodSandboxId:74c91730aa10b55879b750f4ecb11dc9efabce469484a4de90d8ac5c17fcf412,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722258153481727119,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5,PodSandboxId:4e54fab8b623ed3b9bcb94c99cb10d22c8618e1c8c554132cf06c99b105505c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722258119955117994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa,PodSandboxId:eefb4a0f49618a2c466e2ed90b7b564044a6b4b58b58fc87822faca89809a9df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258119894842578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437,PodSandboxId:9eba997524f9863c8a2b9baf6b1e31761f414b96249cdb4baffb63d6e36c884c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258119847766821,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]
string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887d0a602ebd5d66feca67342705a56d5def822e1e6fa40fd2a848c4c0c5c74c,PodSandboxId:d31205dfe2fe1e38c7c9b308ca8cc0ee2a2e9c75e552e5f2962b1ee1ccf88ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258119731052380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed,PodSandboxId:fb7f85c82c68e52a57c967eb74339924fc2103a9ca10e9db3c84e9480368da59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722258115975387058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2,PodSandboxId:965a9c7af942e749986e39bf92989c092abeb825281088be1c74ed636aed2190,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258115956480402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51,PodSandboxId:301d79b34dfa9980fd827a25a30e9b13169aa794770f60a8ce9020d8fb2f40ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722258115910274733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab,PodSandboxId:b7d5979d0ef19135ae67918afcd24acdba8fc64a12b57c66dfea58ddc0a50cb4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722258115861013206,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6757ec4b5063f592b5ae623c317924f71cdaa2470b792d68365c215e53cea5,PodSandboxId:dfe50b635a76114ceb8ff141f4c7a195427d6f700e1369ff6f55954075ce19c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722257791988475390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cmdrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f011b54-3ee4-49b8-9c78-08bff7fb60d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2ed849,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4,PodSandboxId:6a4e7f0d2c68cb7d88248b089fda6115a1e13887a2b0b9ada4996ac6daad5830,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722257732868863687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dbqpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2dfe5b-569e-43bf-bce8-933daf37c819,},Annotations:map[string]string{io.kubernetes.container.hash: 2ddecbbf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bab72befc4469a4de2b75053e84be8eab0b3e83766fc6e338e2e450781059cd,PodSandboxId:20383bc60250627c4e427be9c5d3b2b89552da6301ad22a539e1b86dc2803514,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722257732860743216,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3c640e19-73fc-493c-812f-d519b75297e9,},Annotations:map[string]string{io.kubernetes.container.hash: 96502ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac,PodSandboxId:429ba2e901b85ee2dfecfdc21f2b87a2ad8e5d13e6d767a1785285bb4de550ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722257721151333061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wqdqp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d02d0326-e2e4-441d-84c7-f8c8f222e641,},Annotations:map[string]string{io.kubernetes.container.hash: d8784998,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937,PodSandboxId:e5cecfcbfdb2d441dba7ce4f34474a3a7807eda1be6b11befd85bf860023ac51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722257716216024056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8bkl,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: faf35352-76e1-43b1-981a-c08cdaa912c6,},Annotations:map[string]string{io.kubernetes.container.hash: bfe00701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24,PodSandboxId:6f176de5ab9356af029772e122a16737b0343c120600aadf2e83e748ab0da84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722257695859781537,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-786745,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 851b217f7d24cd2f246d40b4f80aa07f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367,PodSandboxId:bbc4a896a313df06c7edf92f4c415d9cb789a6abdbb5a3a37f016843d02e3c6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722257695832359038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ca80ef476c74bb62e711d2bb2be97d25,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae,PodSandboxId:dfc6102a64d120daa9ad8cddc916e1e17aabb3fdaadccb3473e7647ba2c95c82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722257695807785654,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c322f4568d95bf5ebc609f7313de03,
},Annotations:map[string]string{io.kubernetes.container.hash: accb4c50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb,PodSandboxId:18f0eeff23b41a8b11385c4d55fa4318b351ec5266175a255f7c4f96591bb746,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722257695769019622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-786745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c6bab34fa4fdd0e975275f31f156e7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00c2445c-f5ca-4c30-8ac0-cc43cdad0229 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8e28cd1159490       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   74c91730aa10b       busybox-fc5497c4f-cmdrr
	c26de213350ce       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   4e54fab8b623e       kindnet-wqdqp
	4248afd116e8a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   eefb4a0f49618       coredns-7db6d8ff4d-dbqpm
	169be91b864f4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   9eba997524f98       kube-proxy-x8bkl
	887d0a602ebd5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   d31205dfe2fe1       storage-provisioner
	211077d6da221       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   fb7f85c82c68e       etcd-multinode-786745
	0e8fccb33964b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   965a9c7af942e       kube-controller-manager-multinode-786745
	4cdfb5260fd6e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   301d79b34dfa9       kube-scheduler-multinode-786745
	30b482750e732       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   b7d5979d0ef19       kube-apiserver-multinode-786745
	2d6757ec4b506       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   dfe50b635a761       busybox-fc5497c4f-cmdrr
	30e55df3954ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   6a4e7f0d2c68c       coredns-7db6d8ff4d-dbqpm
	0bab72befc446       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   20383bc602506       storage-provisioner
	45f143f337828       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   429ba2e901b85       kindnet-wqdqp
	5fb78eca10406       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   e5cecfcbfdb2d       kube-proxy-x8bkl
	ff80069d557e8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   6f176de5ab935       kube-controller-manager-multinode-786745
	ad86c660fa96a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   bbc4a896a313d       kube-apiserver-multinode-786745
	a60d1b45bae61       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   dfc6102a64d12       etcd-multinode-786745
	294e87b4f8ed7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   18f0eeff23b41       kube-scheduler-multinode-786745
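The container status table above is the node's CRI-level view at collection time: the nine Running entries with ATTEMPT 1 are the containers recreated after the node restart, and the nine Exited ATTEMPT 0 entries are their pre-restart instances. A comparable listing can be pulled by hand with crictl; the command below is only a sketch, assuming the profile name shown in these logs and a guest VM that is still running:

  minikube ssh -p multinode-786745 "sudo crictl ps -a"

The -a flag includes exited containers, which is what makes the restart history visible in a table like the one above.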
	
	
	==> coredns [30e55df3954ecdd69acf7cfc302ddc5669299dcb24067b857f1ccf7c777372b4] <==
	[INFO] 10.244.1.2:52917 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001900471s
	[INFO] 10.244.1.2:51016 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091406s
	[INFO] 10.244.1.2:58676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068587s
	[INFO] 10.244.1.2:34892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294122s
	[INFO] 10.244.1.2:39674 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130339s
	[INFO] 10.244.1.2:44075 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065372s
	[INFO] 10.244.1.2:40152 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065131s
	[INFO] 10.244.0.3:35613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009574s
	[INFO] 10.244.0.3:44974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098718s
	[INFO] 10.244.0.3:46544 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066254s
	[INFO] 10.244.0.3:38737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071673s
	[INFO] 10.244.1.2:42977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150139s
	[INFO] 10.244.1.2:45957 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017653s
	[INFO] 10.244.1.2:57806 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112372s
	[INFO] 10.244.1.2:59253 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108422s
	[INFO] 10.244.0.3:33134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114617s
	[INFO] 10.244.0.3:48962 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000079995s
	[INFO] 10.244.0.3:46956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093325s
	[INFO] 10.244.0.3:35569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073722s
	[INFO] 10.244.1.2:42815 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128878s
	[INFO] 10.244.1.2:40198 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000079121s
	[INFO] 10.244.1.2:42658 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119204s
	[INFO] 10.244.1.2:42495 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069487s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
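Each query line from this coredns instance logs the client address, the query type and name, the response code, the DNS flags, the answer size, and the lookup latency; the mix of NXDOMAIN answers for partially qualified names and NOERROR for kubernetes.default.svc.cluster.local is the normal resolver search-path pattern rather than a failure. If in-cluster DNS needs to be re-checked by hand, the busybox test pod already deployed in the default namespace can be reused; a sketch, assuming the kubectl context matches the profile name and the pod is still running:

  kubectl --context multinode-786745 exec busybox-fc5497c4f-cmdrr -- nslookup kubernetes.default.svc.cluster.local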
	
	
	==> coredns [4248afd116e8a5a6eb057e479638bb0622fe0065fbea601b4bd5ccca32a6b5fa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32935 - 61842 "HINFO IN 2841602399000264551.6073448778032366050. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01130462s
	
	
	==> describe nodes <==
	Name:               multinode-786745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-786745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=multinode-786745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_55_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:54:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-786745
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:06:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:54:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:54:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:54:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:01:59 +0000   Mon, 29 Jul 2024 12:55:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    multinode-786745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06e9fb264e204bb9b5a3154b75b88dcf
	  System UUID:                06e9fb26-4e20-4bb9-b5a3-154b75b88dcf
	  Boot ID:                    e0f0f261-ef7f-48ba-ac73-457378c5e0ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cmdrr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 coredns-7db6d8ff4d-dbqpm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-786745                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-wqdqp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-786745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-786745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-x8bkl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-786745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-786745 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-786745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-786745 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-786745 event: Registered Node multinode-786745 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-786745 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node multinode-786745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node multinode-786745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node multinode-786745 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node multinode-786745 event: Registered Node multinode-786745 in Controller
	
	
	Name:               multinode-786745-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-786745-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=multinode-786745
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_02_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:02:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-786745-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:03:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 13:03:10 +0000   Mon, 29 Jul 2024 13:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    multinode-786745-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5de3da07252944eab9df4eb5bd47f786
	  System UUID:                5de3da07-2529-44ea-b9df-4eb5bd47f786
	  Boot ID:                    49ff78ff-6a59-463c-9087-5d32bd59581d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dbtnh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kindnet-knz5q              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-rhx5z           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  Starting                 9m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-786745-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-786745-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-786745-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-786745-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m28s (x2 over 3m28s)  kubelet          Node multinode-786745-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s (x2 over 3m28s)  kubelet          Node multinode-786745-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s (x2 over 3m28s)  kubelet          Node multinode-786745-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m7s                   kubelet          Node multinode-786745-m02 status is now: NodeReady
	  Normal  NodeNotReady             106s                   node-controller  Node multinode-786745-m02 status is now: NodeNotReady
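For multinode-786745-m02 the lease RenewTime stops at 13:03:40 and every condition is Unknown with "Kubelet stopped posting node status", which is why the node carries the node.kubernetes.io/unreachable taints and the NodeNotReady event above. The same picture can be re-derived directly from the cluster; a sketch, assuming the kubectl context matches the profile name:

  kubectl --context multinode-786745 get nodes -o wide
  kubectl --context multinode-786745 describe node multinode-786745-m02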
	
	
	==> dmesg <==
	[  +0.053288] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056040] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.180428] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.121356] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.287823] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.166258] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.891371] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.060675] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 12:55] systemd-fstab-generator[1267]: Ignoring "noauto" option for root device
	[  +0.085459] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.505205] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.566089] systemd-fstab-generator[1464]: Ignoring "noauto" option for root device
	[  +5.837393] kauditd_printk_skb: 51 callbacks suppressed
	[Jul29 12:56] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 13:01] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.142358] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.169574] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.136009] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.272914] systemd-fstab-generator[2865]: Ignoring "noauto" option for root device
	[  +0.700761] systemd-fstab-generator[2963]: Ignoring "noauto" option for root device
	[  +2.156807] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	[  +4.658548] kauditd_printk_skb: 184 callbacks suppressed
	[Jul29 13:02] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.480227] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[ +18.292639] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [211077d6da221155b4786e9764d1afbe85435dcbc72bc299c48a89fcdd1834ed] <==
	{"level":"info","ts":"2024-07-29T13:01:56.496856Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:01:56.500082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=(17911497232019635470)"}
	{"level":"info","ts":"2024-07-29T13:01:56.500176Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-07-29T13:01:56.500334Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:01:56.500381Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:01:56.520731Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:01:56.520767Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:01:56.52067Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:01:56.539845Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:01:56.539919Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:01:57.419038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:01:57.419101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:01:57.41914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-07-29T13:01:57.419154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.41916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.419204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.419237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-07-29T13:01:57.42441Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:multinode-786745 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:01:57.424463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:01:57.424951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:01:57.426576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-07-29T13:01:57.427685Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:01:57.427716Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:01:57.42848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:02:43.728496Z","caller":"traceutil/trace.go:171","msg":"trace[1428817440] transaction","detail":"{read_only:false; response_revision:1049; number_of_response:1; }","duration":"185.035836ms","start":"2024-07-29T13:02:43.543431Z","end":"2024-07-29T13:02:43.728467Z","steps":["trace[1428817440] 'process raft request'  (duration: 184.777152ms)"],"step_count":1}
	
	
	==> etcd [a60d1b45bae61b9c591610abc3feabe745dac18b865d9e256a54f0fd596b40ae] <==
	{"level":"info","ts":"2024-07-29T12:54:57.232651Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:54:57.232704Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:54:57.246679Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:54:57.246801Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:54:57.246845Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-07-29T12:56:04.054138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.030838ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4399613308981066646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:3d0e90fe8ee5bb95>","response":"size:41"}
	{"level":"info","ts":"2024-07-29T12:56:04.054355Z","caller":"traceutil/trace.go:171","msg":"trace[1649649370] linearizableReadLoop","detail":"{readStateIndex:468; appliedIndex:466; }","duration":"129.380389ms","start":"2024-07-29T12:56:03.924951Z","end":"2024-07-29T12:56:04.054331Z","steps":["trace[1649649370] 'read index received'  (duration: 128.868276ms)","trace[1649649370] 'applied index is now lower than readState.Index'  (duration: 511.609µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:56:04.054526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.550704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-786745-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T12:56:04.054581Z","caller":"traceutil/trace.go:171","msg":"trace[754789716] range","detail":"{range_begin:/registry/minions/multinode-786745-m02; range_end:; response_count:1; response_revision:446; }","duration":"129.624459ms","start":"2024-07-29T12:56:03.924928Z","end":"2024-07-29T12:56:04.054552Z","steps":["trace[754789716] 'agreement among raft nodes before linearized reading'  (duration: 129.510328ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:56:04.055054Z","caller":"traceutil/trace.go:171","msg":"trace[783250578] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"171.961229ms","start":"2024-07-29T12:56:03.883086Z","end":"2024-07-29T12:56:04.055047Z","steps":["trace[783250578] 'process raft request'  (duration: 171.179812ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:57:03.721915Z","caller":"traceutil/trace.go:171","msg":"trace[449441776] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"178.919054ms","start":"2024-07-29T12:57:03.54295Z","end":"2024-07-29T12:57:03.721869Z","steps":["trace[449441776] 'process raft request'  (duration: 178.87269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:57:03.722282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.701373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-29T12:57:03.722354Z","caller":"traceutil/trace.go:171","msg":"trace[609165710] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:586; }","duration":"182.923051ms","start":"2024-07-29T12:57:03.539414Z","end":"2024-07-29T12:57:03.722337Z","steps":["trace[609165710] 'agreement among raft nodes before linearized reading'  (duration: 182.621361ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:57:03.72192Z","caller":"traceutil/trace.go:171","msg":"trace[959648998] linearizableReadLoop","detail":"{readStateIndex:625; appliedIndex:624; }","duration":"182.396018ms","start":"2024-07-29T12:57:03.539491Z","end":"2024-07-29T12:57:03.721887Z","steps":["trace[959648998] 'read index received'  (duration: 104.695368ms)","trace[959648998] 'applied index is now lower than readState.Index'  (duration: 77.697634ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:57:03.722555Z","caller":"traceutil/trace.go:171","msg":"trace[1237263912] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"242.819304ms","start":"2024-07-29T12:57:03.479724Z","end":"2024-07-29T12:57:03.722543Z","steps":["trace[1237263912] 'process raft request'  (duration: 164.329128ms)","trace[1237263912] 'compare'  (duration: 77.637709ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:00:20.075444Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T13:00:20.075566Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-786745","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"warn","ts":"2024-07-29T13:00:20.075735Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:00:20.075834Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:00:20.160504Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:00:20.160546Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T13:00:20.16067Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2024-07-29T13:00:20.163379Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:00:20.163563Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-29T13:00:20.163651Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-786745","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> kernel <==
	 13:06:07 up 11 min,  0 users,  load average: 0.48, 0.17, 0.10
	Linux multinode-786745 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [45f143f337828e87ac9203fe0617e9ba4140e5db0999c14e0074fefb7c218fac] <==
	I0729 12:59:32.119324       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 12:59:42.121682       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 12:59:42.121803       1 main.go:299] handling current node
	I0729 12:59:42.121831       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 12:59:42.121849       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 12:59:42.121999       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 12:59:42.122020       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 12:59:52.116045       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 12:59:52.116232       1 main.go:299] handling current node
	I0729 12:59:52.116270       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 12:59:52.116278       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 12:59:52.116572       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 12:59:52.116666       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 13:00:02.120847       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:00:02.120946       1 main.go:299] handling current node
	I0729 13:00:02.120978       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:00:02.120985       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:00:02.121127       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:00:02.121134       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	I0729 13:00:12.119442       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:00:12.119546       1 main.go:299] handling current node
	I0729 13:00:12.119574       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:00:12.119690       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:00:12.119882       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 13:00:12.119907       1 main.go:322] Node multinode-786745-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c26de213350ce97698c98f95153db9c8d52590d17fb062a1db6bedab8dc6a1c5] <==
	I0729 13:05:00.926726       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:05:10.934087       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:05:10.934244       1 main.go:299] handling current node
	I0729 13:05:10.934281       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:05:10.934300       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:05:20.934353       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:05:20.935332       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:05:20.935561       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:05:20.935587       1 main.go:299] handling current node
	I0729 13:05:30.926544       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:05:30.926704       1 main.go:299] handling current node
	I0729 13:05:30.926735       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:05:30.926755       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:05:40.935051       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:05:40.935197       1 main.go:299] handling current node
	I0729 13:05:40.935228       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:05:40.935247       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:05:50.934356       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:05:50.934403       1 main.go:299] handling current node
	I0729 13:05:50.934424       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:05:50.934430       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:06:00.926823       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0729 13:06:00.926922       1 main.go:322] Node multinode-786745-m02 has CIDR [10.244.1.0/24] 
	I0729 13:06:00.927049       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0729 13:06:00.927083       1 main.go:299] handling current node
	
	
	==> kube-apiserver [30b482750e732af0f3bf857c13214a0d108d5793752a016c9c41d7c302a384ab] <==
	I0729 13:01:58.846394       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 13:01:58.846491       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:01:58.851628       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 13:01:58.851690       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 13:01:58.852586       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 13:01:58.852697       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 13:01:58.853568       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 13:01:58.854260       1 aggregator.go:165] initial CRD sync complete...
	I0729 13:01:58.854302       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 13:01:58.854309       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 13:01:58.854324       1 cache.go:39] Caches are synced for autoregister controller
	I0729 13:01:58.879721       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:01:58.883503       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 13:01:58.896215       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:01:58.896264       1 policy_source.go:224] refreshing policies
	E0729 13:01:58.898089       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 13:01:58.968553       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:01:59.760432       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 13:02:01.194950       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 13:02:01.316691       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 13:02:01.327542       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 13:02:01.421927       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 13:02:01.434466       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 13:02:11.620105       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 13:02:11.669883       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ad86c660fa96a1063950a1071609f48d08ee8e215e9b0e3919b518af91c34367] <==
	I0729 12:55:01.615880       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 12:55:14.520306       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 12:55:15.539086       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 12:56:33.520068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56658: use of closed network connection
	E0729 12:56:33.700447       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56666: use of closed network connection
	E0729 12:56:33.900929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56692: use of closed network connection
	E0729 12:56:34.079853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56712: use of closed network connection
	E0729 12:56:34.252068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56734: use of closed network connection
	E0729 12:56:34.418970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56752: use of closed network connection
	E0729 12:56:34.701327       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56770: use of closed network connection
	E0729 12:56:34.865727       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56790: use of closed network connection
	E0729 12:56:35.035535       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56796: use of closed network connection
	E0729 12:56:35.211675       1 conn.go:339] Error on socket receive: read tcp 192.168.39.10:8443->192.168.39.1:56812: use of closed network connection
	I0729 13:00:20.080570       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0729 13:00:20.088250       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.088453       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093336       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093407       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093451       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093480       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093516       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093559       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.093760       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:00:20.097759       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0729 13:00:20.097370       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [0e8fccb33964b6ea7f08c97c611ac1ba718022ccb8a46960c0e5bb26296b20a2] <==
	I0729 13:02:39.474115       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m02" podCIDRs=["10.244.1.0/24"]
	I0729 13:02:41.360161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.856µs"
	I0729 13:02:41.399693       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.262µs"
	I0729 13:02:41.411916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.988µs"
	I0729 13:02:41.436580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.105µs"
	I0729 13:02:41.445237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.069µs"
	I0729 13:02:41.449426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.871µs"
	I0729 13:02:42.059687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.693µs"
	I0729 13:03:00.484535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:03:00.503535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.134µs"
	I0729 13:03:00.518901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.755µs"
	I0729 13:03:04.759658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.854851ms"
	I0729 13:03:04.760094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.636µs"
	I0729 13:03:18.660877       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:03:19.713755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:03:19.713882       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m03\" does not exist"
	I0729 13:03:19.731155       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m03" podCIDRs=["10.244.2.0/24"]
	I0729 13:03:40.002219       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:03:45.387574       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 13:04:21.596417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.66318ms"
	I0729 13:04:21.596754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.523µs"
	I0729 13:04:51.506264       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hsvcn"
	I0729 13:04:51.535020       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hsvcn"
	I0729 13:04:51.535071       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9rz9s"
	I0729 13:04:51.557144       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9rz9s"
	
	
	==> kube-controller-manager [ff80069d557e82a2fbe110645eb8bb1785280b8c19120b1ebfdc90ef5028ca24] <==
	I0729 12:56:04.056361       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m02\" does not exist"
	I0729 12:56:04.095238       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m02" podCIDRs=["10.244.1.0/24"]
	I0729 12:56:04.519345       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-786745-m02"
	I0729 12:56:25.412446       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:56:27.724938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.214135ms"
	I0729 12:56:27.738140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.065431ms"
	I0729 12:56:27.738559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.031µs"
	I0729 12:56:27.772358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.012µs"
	I0729 12:56:32.884300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.751328ms"
	I0729 12:56:32.884642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.615µs"
	I0729 12:56:32.991003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.579306ms"
	I0729 12:56:32.991183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.964µs"
	I0729 12:57:03.729752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:57:03.730183       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m03\" does not exist"
	I0729 12:57:03.765021       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m03" podCIDRs=["10.244.2.0/24"]
	I0729 12:57:04.540721       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-786745-m03"
	I0729 12:57:24.915754       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m03"
	I0729 12:57:53.230128       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:57:54.308362       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:57:54.309427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-786745-m03\" does not exist"
	I0729 12:57:54.326680       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-786745-m03" podCIDRs=["10.244.3.0/24"]
	I0729 12:58:14.619344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m02"
	I0729 12:58:59.594058       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-786745-m03"
	I0729 12:58:59.644580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.089636ms"
	I0729 12:58:59.644952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.165µs"
	
	
	==> kube-proxy [169be91b864f4745a7086cbcfbd9f9370f3e2ebc05c21c569b7b8b28bf84c437] <==
	I0729 13:02:00.161092       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:02:00.176961       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0729 13:02:00.240795       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:02:00.240857       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:02:00.240875       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:02:00.248727       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:02:00.248977       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:02:00.249043       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:02:00.250978       1 config.go:192] "Starting service config controller"
	I0729 13:02:00.251009       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:02:00.251030       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:02:00.251034       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:02:00.251406       1 config.go:319] "Starting node config controller"
	I0729 13:02:00.251435       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:02:00.352177       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:02:00.352284       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:02:00.353099       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5fb78eca10406c47f89e5fa44e1e216424c6e1e0b4814a94d95192a5373a6937] <==
	I0729 12:55:16.511434       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:55:16.538143       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0729 12:55:16.575712       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:55:16.575751       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:55:16.575767       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:55:16.579161       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:55:16.579851       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:55:16.580086       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:55:16.582244       1 config.go:192] "Starting service config controller"
	I0729 12:55:16.582708       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:55:16.582768       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:55:16.582787       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:55:16.583903       1 config.go:319] "Starting node config controller"
	I0729 12:55:16.592681       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:55:16.683699       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:55:16.683854       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:55:16.698102       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [294e87b4f8ed7e791a8372cc54f185076d5e60c4692bc96dbd88660f83193fdb] <==
	W0729 12:54:59.589893       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 12:54:59.589945       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:54:59.601158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 12:54:59.601206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 12:54:59.635081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 12:54:59.635131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 12:54:59.642276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 12:54:59.642327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 12:54:59.644444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 12:54:59.644466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 12:54:59.658327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 12:54:59.658368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 12:54:59.662789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 12:54:59.662833       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 12:54:59.675446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 12:54:59.675498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 12:54:59.755210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 12:54:59.755267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 12:54:59.800707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 12:54:59.800838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0729 12:55:01.748195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:00:20.071972       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 13:00:20.072084       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 13:00:20.072327       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 13:00:20.072768       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4cdfb5260fd6e6f2334d8cd3862186c9ea13b8641d32a6881190957442937f51] <==
	I0729 13:01:56.659753       1 serving.go:380] Generated self-signed cert in-memory
	W0729 13:01:58.824404       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 13:01:58.824507       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:01:58.824518       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 13:01:58.824547       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 13:01:58.867434       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 13:01:58.867476       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:01:58.871086       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 13:01:58.871240       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 13:01:58.871275       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:01:58.871305       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 13:01:58.972087       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247212    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d02d0326-e2e4-441d-84c7-f8c8f222e641-cni-cfg\") pod \"kindnet-wqdqp\" (UID: \"d02d0326-e2e4-441d-84c7-f8c8f222e641\") " pod="kube-system/kindnet-wqdqp"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247228    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d02d0326-e2e4-441d-84c7-f8c8f222e641-lib-modules\") pod \"kindnet-wqdqp\" (UID: \"d02d0326-e2e4-441d-84c7-f8c8f222e641\") " pod="kube-system/kindnet-wqdqp"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247243    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d02d0326-e2e4-441d-84c7-f8c8f222e641-xtables-lock\") pod \"kindnet-wqdqp\" (UID: \"d02d0326-e2e4-441d-84c7-f8c8f222e641\") " pod="kube-system/kindnet-wqdqp"
	Jul 29 13:01:59 multinode-786745 kubelet[3096]: I0729 13:01:59.247310    3096 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faf35352-76e1-43b1-981a-c08cdaa912c6-lib-modules\") pod \"kube-proxy-x8bkl\" (UID: \"faf35352-76e1-43b1-981a-c08cdaa912c6\") " pod="kube-system/kube-proxy-x8bkl"
	Jul 29 13:02:01 multinode-786745 kubelet[3096]: I0729 13:02:01.422494    3096 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 13:02:55 multinode-786745 kubelet[3096]: E0729 13:02:55.270392    3096 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:02:55 multinode-786745 kubelet[3096]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:03:55 multinode-786745 kubelet[3096]: E0729 13:03:55.270735    3096 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:03:55 multinode-786745 kubelet[3096]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:03:55 multinode-786745 kubelet[3096]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:03:55 multinode-786745 kubelet[3096]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:03:55 multinode-786745 kubelet[3096]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:04:55 multinode-786745 kubelet[3096]: E0729 13:04:55.270345    3096 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:04:55 multinode-786745 kubelet[3096]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:04:55 multinode-786745 kubelet[3096]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:04:55 multinode-786745 kubelet[3096]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:04:55 multinode-786745 kubelet[3096]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:05:55 multinode-786745 kubelet[3096]: E0729 13:05:55.269805    3096 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:05:55 multinode-786745 kubelet[3096]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:05:55 multinode-786745 kubelet[3096]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:05:55 multinode-786745 kubelet[3096]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:05:55 multinode-786745 kubelet[3096]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0729 13:06:06.291967  272883 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19341-233093/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-786745 -n multinode-786745
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-786745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.38s)

TestPreload (312.75s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-695254 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 13:12:18.313697  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-695254 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m44.780545153s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-695254 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-695254 image pull gcr.io/k8s-minikube/busybox: (3.596584684s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-695254
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-695254: (7.2640911s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-695254 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0729 13:14:10.928498  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 13:14:27.880956  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-695254 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.271150239s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-695254 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-29 13:15:02.069246827 +0000 UTC m=+4380.332601639
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-695254 -n test-preload-695254
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-695254 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-695254 logs -n 25: (1.065881792s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745 sudo cat                                       | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m03_multinode-786745.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt                       | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m02:/home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n                                                                 | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | multinode-786745-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-786745 ssh -n multinode-786745-m02 sudo cat                                   | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-786745 node stop m03                                                          | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:57 UTC |
	| node    | multinode-786745 node start                                                             | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:57 UTC | 29 Jul 24 12:58 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-786745                                                                | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:58 UTC |                     |
	| stop    | -p multinode-786745                                                                     | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 12:58 UTC |                     |
	| start   | -p multinode-786745                                                                     | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:00 UTC | 29 Jul 24 13:03 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-786745                                                                | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:03 UTC |                     |
	| node    | multinode-786745 node delete                                                            | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:03 UTC | 29 Jul 24 13:03 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-786745 stop                                                                   | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:03 UTC |                     |
	| start   | -p multinode-786745                                                                     | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:06 UTC | 29 Jul 24 13:09 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-786745                                                                | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:09 UTC |                     |
	| start   | -p multinode-786745-m02                                                                 | multinode-786745-m02 | jenkins | v1.33.1 | 29 Jul 24 13:09 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-786745-m03                                                                 | multinode-786745-m03 | jenkins | v1.33.1 | 29 Jul 24 13:09 UTC | 29 Jul 24 13:09 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-786745                                                                 | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:09 UTC |                     |
	| delete  | -p multinode-786745-m03                                                                 | multinode-786745-m03 | jenkins | v1.33.1 | 29 Jul 24 13:09 UTC | 29 Jul 24 13:09 UTC |
	| delete  | -p multinode-786745                                                                     | multinode-786745     | jenkins | v1.33.1 | 29 Jul 24 13:09 UTC | 29 Jul 24 13:09 UTC |
	| start   | -p test-preload-695254                                                                  | test-preload-695254  | jenkins | v1.33.1 | 29 Jul 24 13:09 UTC | 29 Jul 24 13:13 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-695254 image pull                                                          | test-preload-695254  | jenkins | v1.33.1 | 29 Jul 24 13:13 UTC | 29 Jul 24 13:13 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-695254                                                                  | test-preload-695254  | jenkins | v1.33.1 | 29 Jul 24 13:13 UTC | 29 Jul 24 13:13 UTC |
	| start   | -p test-preload-695254                                                                  | test-preload-695254  | jenkins | v1.33.1 | 29 Jul 24 13:13 UTC | 29 Jul 24 13:15 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-695254 image list                                                          | test-preload-695254  | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:13:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:13:47.626474  275704 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:13:47.626749  275704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:13:47.626759  275704 out.go:304] Setting ErrFile to fd 2...
	I0729 13:13:47.626763  275704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:13:47.627008  275704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:13:47.627561  275704 out.go:298] Setting JSON to false
	I0729 13:13:47.628442  275704 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10571,"bootTime":1722248257,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:13:47.628500  275704 start.go:139] virtualization: kvm guest
	I0729 13:13:47.630914  275704 out.go:177] * [test-preload-695254] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:13:47.632577  275704 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:13:47.632573  275704 notify.go:220] Checking for updates...
	I0729 13:13:47.635351  275704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:13:47.636540  275704 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:13:47.637991  275704 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:13:47.639293  275704 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:13:47.640564  275704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:13:47.642198  275704 config.go:182] Loaded profile config "test-preload-695254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 13:13:47.642621  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:47.642669  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:47.657444  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44909
	I0729 13:13:47.657815  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:47.658358  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:13:47.658378  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:47.658752  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:47.658935  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:13:47.660748  275704 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:13:47.661990  275704 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:13:47.662296  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:47.662345  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:47.676694  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0729 13:13:47.677147  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:47.677588  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:13:47.677606  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:47.677982  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:47.678191  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:13:47.714125  275704 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:13:47.715588  275704 start.go:297] selected driver: kvm2
	I0729 13:13:47.715609  275704 start.go:901] validating driver "kvm2" against &{Name:test-preload-695254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-695254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:13:47.715765  275704 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:13:47.716502  275704 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:13:47.716594  275704 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:13:47.732646  275704 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:13:47.733084  275704 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:13:47.733118  275704 cni.go:84] Creating CNI manager for ""
	I0729 13:13:47.733128  275704 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:13:47.733204  275704 start.go:340] cluster config:
	{Name:test-preload-695254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-695254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:13:47.733335  275704 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:13:47.735206  275704 out.go:177] * Starting "test-preload-695254" primary control-plane node in "test-preload-695254" cluster
	I0729 13:13:47.736541  275704 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 13:13:47.892324  275704 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 13:13:47.892365  275704 cache.go:56] Caching tarball of preloaded images
	I0729 13:13:47.892529  275704 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 13:13:47.894476  275704 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0729 13:13:47.896151  275704 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 13:13:48.052667  275704 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 13:14:05.664762  275704 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 13:14:05.664892  275704 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 13:14:06.533265  275704 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0729 13:14:06.533403  275704 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/config.json ...
	I0729 13:14:06.533629  275704 start.go:360] acquireMachinesLock for test-preload-695254: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:14:06.533690  275704 start.go:364] duration metric: took 40.186µs to acquireMachinesLock for "test-preload-695254"
	I0729 13:14:06.533711  275704 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:14:06.533719  275704 fix.go:54] fixHost starting: 
	I0729 13:14:06.534018  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:14:06.534050  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:14:06.548727  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38093
	I0729 13:14:06.549230  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:14:06.549777  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:14:06.549804  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:14:06.550166  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:14:06.550391  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:06.550542  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetState
	I0729 13:14:06.552268  275704 fix.go:112] recreateIfNeeded on test-preload-695254: state=Stopped err=<nil>
	I0729 13:14:06.552313  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	W0729 13:14:06.552482  275704 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:14:06.555467  275704 out.go:177] * Restarting existing kvm2 VM for "test-preload-695254" ...
	I0729 13:14:06.556954  275704 main.go:141] libmachine: (test-preload-695254) Calling .Start
	I0729 13:14:06.557133  275704 main.go:141] libmachine: (test-preload-695254) Ensuring networks are active...
	I0729 13:14:06.557865  275704 main.go:141] libmachine: (test-preload-695254) Ensuring network default is active
	I0729 13:14:06.558225  275704 main.go:141] libmachine: (test-preload-695254) Ensuring network mk-test-preload-695254 is active
	I0729 13:14:06.558541  275704 main.go:141] libmachine: (test-preload-695254) Getting domain xml...
	I0729 13:14:06.559254  275704 main.go:141] libmachine: (test-preload-695254) Creating domain...
	I0729 13:14:07.736662  275704 main.go:141] libmachine: (test-preload-695254) Waiting to get IP...
	I0729 13:14:07.737488  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:07.737851  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:07.737927  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:07.737853  275805 retry.go:31] will retry after 252.144515ms: waiting for machine to come up
	I0729 13:14:07.991247  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:07.991659  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:07.991685  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:07.991611  275805 retry.go:31] will retry after 373.605729ms: waiting for machine to come up
	I0729 13:14:08.367323  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:08.367763  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:08.367788  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:08.367711  275805 retry.go:31] will retry after 452.401168ms: waiting for machine to come up
	I0729 13:14:08.821179  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:08.821450  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:08.821471  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:08.821430  275805 retry.go:31] will retry after 539.558984ms: waiting for machine to come up
	I0729 13:14:09.362162  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:09.362528  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:09.362553  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:09.362493  275805 retry.go:31] will retry after 529.539163ms: waiting for machine to come up
	I0729 13:14:09.893283  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:09.893662  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:09.893686  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:09.893609  275805 retry.go:31] will retry after 874.782871ms: waiting for machine to come up
	I0729 13:14:10.769602  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:10.770070  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:10.770098  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:10.770020  275805 retry.go:31] will retry after 977.55636ms: waiting for machine to come up
	I0729 13:14:11.749607  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:11.749962  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:11.749996  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:11.749893  275805 retry.go:31] will retry after 1.292204152s: waiting for machine to come up
	I0729 13:14:13.044494  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:13.044960  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:13.044982  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:13.044937  275805 retry.go:31] will retry after 1.295350776s: waiting for machine to come up
	I0729 13:14:14.342697  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:14.343102  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:14.343131  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:14.343060  275805 retry.go:31] will retry after 1.885861648s: waiting for machine to come up
	I0729 13:14:16.231131  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:16.231782  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:16.231819  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:16.231715  275805 retry.go:31] will retry after 2.766290602s: waiting for machine to come up
	I0729 13:14:19.001044  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:19.001455  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:19.001477  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:19.001417  275805 retry.go:31] will retry after 3.309844281s: waiting for machine to come up
	I0729 13:14:22.312731  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:22.313145  275704 main.go:141] libmachine: (test-preload-695254) DBG | unable to find current IP address of domain test-preload-695254 in network mk-test-preload-695254
	I0729 13:14:22.313183  275704 main.go:141] libmachine: (test-preload-695254) DBG | I0729 13:14:22.313072  275805 retry.go:31] will retry after 2.839713475s: waiting for machine to come up
	I0729 13:14:25.156064  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.156515  275704 main.go:141] libmachine: (test-preload-695254) Found IP for machine: 192.168.39.171
	I0729 13:14:25.156539  275704 main.go:141] libmachine: (test-preload-695254) Reserving static IP address...
	I0729 13:14:25.156552  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has current primary IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.156948  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "test-preload-695254", mac: "52:54:00:55:a0:3a", ip: "192.168.39.171"} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.156973  275704 main.go:141] libmachine: (test-preload-695254) Reserved static IP address: 192.168.39.171
	I0729 13:14:25.156985  275704 main.go:141] libmachine: (test-preload-695254) DBG | skip adding static IP to network mk-test-preload-695254 - found existing host DHCP lease matching {name: "test-preload-695254", mac: "52:54:00:55:a0:3a", ip: "192.168.39.171"}
	I0729 13:14:25.156994  275704 main.go:141] libmachine: (test-preload-695254) Waiting for SSH to be available...
	I0729 13:14:25.157048  275704 main.go:141] libmachine: (test-preload-695254) DBG | Getting to WaitForSSH function...
	I0729 13:14:25.159028  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.159380  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.159411  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.159515  275704 main.go:141] libmachine: (test-preload-695254) DBG | Using SSH client type: external
	I0729 13:14:25.159579  275704 main.go:141] libmachine: (test-preload-695254) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa (-rw-------)
	I0729 13:14:25.159608  275704 main.go:141] libmachine: (test-preload-695254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:14:25.159619  275704 main.go:141] libmachine: (test-preload-695254) DBG | About to run SSH command:
	I0729 13:14:25.159625  275704 main.go:141] libmachine: (test-preload-695254) DBG | exit 0
	I0729 13:14:25.280663  275704 main.go:141] libmachine: (test-preload-695254) DBG | SSH cmd err, output: <nil>: 
	I0729 13:14:25.281015  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetConfigRaw
	I0729 13:14:25.281681  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetIP
	I0729 13:14:25.284444  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.284755  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.284787  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.285026  275704 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/config.json ...
	I0729 13:14:25.285213  275704 machine.go:94] provisionDockerMachine start ...
	I0729 13:14:25.285231  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:25.285442  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:25.287497  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.287811  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.287836  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.287933  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:25.288117  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.288267  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.288381  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:25.288535  275704 main.go:141] libmachine: Using SSH client type: native
	I0729 13:14:25.288749  275704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 13:14:25.288765  275704 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:14:25.384972  275704 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:14:25.384997  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetMachineName
	I0729 13:14:25.385265  275704 buildroot.go:166] provisioning hostname "test-preload-695254"
	I0729 13:14:25.385298  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetMachineName
	I0729 13:14:25.385523  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:25.387813  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.388190  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.388218  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.388372  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:25.388547  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.388695  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.388826  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:25.388988  275704 main.go:141] libmachine: Using SSH client type: native
	I0729 13:14:25.389172  275704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 13:14:25.389184  275704 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-695254 && echo "test-preload-695254" | sudo tee /etc/hostname
	I0729 13:14:25.499245  275704 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-695254
	
	I0729 13:14:25.499279  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:25.501942  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.502321  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.502348  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.502519  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:25.502712  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.502891  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.503037  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:25.503304  275704 main.go:141] libmachine: Using SSH client type: native
	I0729 13:14:25.503462  275704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 13:14:25.503478  275704 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-695254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-695254/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-695254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:14:25.610237  275704 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:14:25.610280  275704 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:14:25.610325  275704 buildroot.go:174] setting up certificates
	I0729 13:14:25.610343  275704 provision.go:84] configureAuth start
	I0729 13:14:25.610358  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetMachineName
	I0729 13:14:25.610686  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetIP
	I0729 13:14:25.613230  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.613587  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.613614  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.613748  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:25.615976  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.616358  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.616387  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.616517  275704 provision.go:143] copyHostCerts
	I0729 13:14:25.616593  275704 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:14:25.616603  275704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:14:25.616673  275704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:14:25.616762  275704 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:14:25.616770  275704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:14:25.616810  275704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:14:25.616876  275704 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:14:25.616884  275704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:14:25.616907  275704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:14:25.616955  275704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.test-preload-695254 san=[127.0.0.1 192.168.39.171 localhost minikube test-preload-695254]
	I0729 13:14:25.714797  275704 provision.go:177] copyRemoteCerts
	I0729 13:14:25.714870  275704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:14:25.714898  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:25.717425  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.717761  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.717792  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.717964  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:25.718151  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.718289  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:25.718390  275704 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa Username:docker}
	I0729 13:14:25.795298  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:14:25.824203  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:14:25.851106  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 13:14:25.877285  275704 provision.go:87] duration metric: took 266.931159ms to configureAuth
	I0729 13:14:25.877313  275704 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:14:25.877477  275704 config.go:182] Loaded profile config "test-preload-695254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 13:14:25.877551  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:25.880276  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.880555  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:25.880575  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:25.880871  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:25.881110  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.881287  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:25.881435  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:25.881608  275704 main.go:141] libmachine: Using SSH client type: native
	I0729 13:14:25.881779  275704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 13:14:25.881798  275704 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:14:26.155622  275704 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:14:26.155653  275704 machine.go:97] duration metric: took 870.427178ms to provisionDockerMachine
	I0729 13:14:26.155666  275704 start.go:293] postStartSetup for "test-preload-695254" (driver="kvm2")
	I0729 13:14:26.155677  275704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:14:26.155721  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:26.156036  275704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:14:26.156069  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:26.158478  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.158827  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:26.158848  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.159003  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:26.159205  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:26.159353  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:26.159472  275704 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa Username:docker}
	I0729 13:14:26.239824  275704 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:14:26.243814  275704 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:14:26.243841  275704 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:14:26.243932  275704 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:14:26.244100  275704 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:14:26.244274  275704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:14:26.253789  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:14:26.276305  275704 start.go:296] duration metric: took 120.624248ms for postStartSetup
	I0729 13:14:26.276358  275704 fix.go:56] duration metric: took 19.742628706s for fixHost
	I0729 13:14:26.276380  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:26.278948  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.279348  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:26.279376  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.279569  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:26.279803  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:26.279959  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:26.280127  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:26.280326  275704 main.go:141] libmachine: Using SSH client type: native
	I0729 13:14:26.280491  275704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0729 13:14:26.280500  275704 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:14:26.381554  275704 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722258866.356073174
	
	I0729 13:14:26.381582  275704 fix.go:216] guest clock: 1722258866.356073174
	I0729 13:14:26.381592  275704 fix.go:229] Guest: 2024-07-29 13:14:26.356073174 +0000 UTC Remote: 2024-07-29 13:14:26.276362713 +0000 UTC m=+38.684224031 (delta=79.710461ms)
	I0729 13:14:26.381612  275704 fix.go:200] guest clock delta is within tolerance: 79.710461ms
	I0729 13:14:26.381618  275704 start.go:83] releasing machines lock for "test-preload-695254", held for 19.847917761s
	I0729 13:14:26.381638  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:26.381909  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetIP
	I0729 13:14:26.384397  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.384807  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:26.384853  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.384975  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:26.385551  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:26.385762  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:26.385860  275704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:14:26.385899  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:26.386008  275704 ssh_runner.go:195] Run: cat /version.json
	I0729 13:14:26.386038  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:26.388538  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.388964  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:26.388997  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.389018  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.389151  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:26.389329  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:26.389444  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:26.389474  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:26.389484  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:26.389649  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:26.389659  275704 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa Username:docker}
	I0729 13:14:26.389804  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:26.389971  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:26.390144  275704 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa Username:docker}
	I0729 13:14:26.462054  275704 ssh_runner.go:195] Run: systemctl --version
	I0729 13:14:26.490544  275704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:14:26.640604  275704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:14:26.646787  275704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:14:26.646869  275704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:14:26.662571  275704 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:14:26.662598  275704 start.go:495] detecting cgroup driver to use...
	I0729 13:14:26.662692  275704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:14:26.677736  275704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:14:26.691548  275704 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:14:26.691621  275704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:14:26.705117  275704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:14:26.718684  275704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:14:26.834461  275704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:14:26.975422  275704 docker.go:233] disabling docker service ...
	I0729 13:14:26.975504  275704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:14:26.989670  275704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:14:27.002655  275704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:14:27.133418  275704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:14:27.264650  275704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:14:27.277717  275704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:14:27.295745  275704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0729 13:14:27.295814  275704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:14:27.306076  275704 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:14:27.306142  275704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:14:27.318680  275704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:14:27.329082  275704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:14:27.339356  275704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:14:27.349510  275704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:14:27.359109  275704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:14:27.375507  275704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:14:27.385141  275704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:14:27.393708  275704 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:14:27.393756  275704 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:14:27.406396  275704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:14:27.415205  275704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:14:27.543736  275704 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:14:27.678959  275704 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:14:27.679024  275704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:14:27.683805  275704 start.go:563] Will wait 60s for crictl version
	I0729 13:14:27.683852  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:27.687710  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:14:27.733469  275704 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:14:27.733566  275704 ssh_runner.go:195] Run: crio --version
	I0729 13:14:27.765776  275704 ssh_runner.go:195] Run: crio --version
	I0729 13:14:27.799652  275704 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0729 13:14:27.801081  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetIP
	I0729 13:14:27.803710  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:27.804085  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:27.804114  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:27.804327  275704 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:14:27.808332  275704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:14:27.820565  275704 kubeadm.go:883] updating cluster {Name:test-preload-695254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-695254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:14:27.820712  275704 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 13:14:27.820755  275704 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:14:27.854926  275704 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 13:14:27.854987  275704 ssh_runner.go:195] Run: which lz4
	I0729 13:14:27.858747  275704 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:14:27.862681  275704 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:14:27.862705  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0729 13:14:29.365497  275704 crio.go:462] duration metric: took 1.506772686s to copy over tarball
	I0729 13:14:29.365575  275704 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:14:31.659685  275704 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.29408224s)
	I0729 13:14:31.659719  275704 crio.go:469] duration metric: took 2.294192571s to extract the tarball
	I0729 13:14:31.659729  275704 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:14:31.702142  275704 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:14:31.742891  275704 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 13:14:31.742919  275704 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:14:31.742994  275704 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:14:31.743011  275704 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 13:14:31.743021  275704 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 13:14:31.743045  275704 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 13:14:31.743022  275704 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 13:14:31.742994  275704 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 13:14:31.743082  275704 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 13:14:31.743000  275704 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 13:14:31.744582  275704 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 13:14:31.744572  275704 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 13:14:31.744597  275704 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 13:14:31.744575  275704 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 13:14:31.744672  275704 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 13:14:31.744583  275704 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:14:31.744593  275704 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 13:14:31.744570  275704 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 13:14:31.951206  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 13:14:31.968156  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0729 13:14:31.992382  275704 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0729 13:14:31.992428  275704 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0729 13:14:31.992473  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:31.998281  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0729 13:14:32.028284  275704 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0729 13:14:32.028335  275704 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 13:14:32.028379  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:32.028379  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0729 13:14:32.056454  275704 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0729 13:14:32.056503  275704 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 13:14:32.056539  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0729 13:14:32.056549  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:32.069911  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 13:14:32.074242  275704 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0729 13:14:32.074313  275704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 13:14:32.085561  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 13:14:32.094539  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0729 13:14:32.111869  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0729 13:14:32.112533  275704 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 13:14:32.112617  275704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 13:14:32.114426  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 13:14:32.174978  275704 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0729 13:14:32.175013  275704 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0729 13:14:32.175020  275704 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 13:14:32.175028  275704 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 13:14:32.175068  275704 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0729 13:14:32.175070  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:32.175155  275704 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0729 13:14:32.175189  275704 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 13:14:32.175225  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:32.227971  275704 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0729 13:14:32.228017  275704 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 13:14:32.228063  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:32.231345  275704 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 13:14:32.231418  275704 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0729 13:14:32.231448  275704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 13:14:32.232377  275704 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0729 13:14:32.232404  275704 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 13:14:32.232432  275704 ssh_runner.go:195] Run: which crictl
	I0729 13:14:32.938718  275704 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:14:35.079331  275704 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.904235516s)
	I0729 13:14:35.079366  275704 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0729 13:14:35.079412  275704 ssh_runner.go:235] Completed: which crictl: (2.904166949s)
	I0729 13:14:35.079469  275704 ssh_runner.go:235] Completed: which crictl: (2.904388058s)
	I0729 13:14:35.079488  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 13:14:35.079403  275704 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 13:14:35.079527  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0729 13:14:35.079561  275704 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 13:14:35.079608  275704 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.848141715s)
	I0729 13:14:35.079633  275704 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0729 13:14:35.079573  275704 ssh_runner.go:235] Completed: which crictl: (2.851497565s)
	I0729 13:14:35.079657  275704 ssh_runner.go:235] Completed: which crictl: (2.84721448s)
	I0729 13:14:35.079667  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0729 13:14:35.079700  275704 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 13:14:35.079733  275704 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.140977864s)
	I0729 13:14:35.176042  275704 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0729 13:14:35.176126  275704 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 13:14:35.176169  275704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 13:14:35.176221  275704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 13:14:35.177522  275704 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 13:14:35.177598  275704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 13:14:35.183423  275704 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 13:14:35.183522  275704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 13:14:36.004267  275704 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0729 13:14:36.004319  275704 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 13:14:36.004320  275704 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0729 13:14:36.004361  275704 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0729 13:14:36.004386  275704 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 13:14:36.004419  275704 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0729 13:14:36.004495  275704 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0729 13:14:36.751743  275704 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0729 13:14:36.751795  275704 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 13:14:36.751861  275704 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 13:14:37.190862  275704 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0729 13:14:37.190917  275704 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 13:14:37.190979  275704 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0729 13:14:39.246497  275704 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.055486634s)
	I0729 13:14:39.246537  275704 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 13:14:39.246564  275704 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 13:14:39.246604  275704 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0729 13:14:39.684443  275704 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 13:14:39.684509  275704 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 13:14:39.684576  275704 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 13:14:40.426534  275704 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0729 13:14:40.426583  275704 cache_images.go:123] Successfully loaded all cached images
	I0729 13:14:40.426588  275704 cache_images.go:92] duration metric: took 8.683658762s to LoadCachedImages
	I0729 13:14:40.426603  275704 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.24.4 crio true true} ...
	I0729 13:14:40.426747  275704 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-695254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-695254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:14:40.426825  275704 ssh_runner.go:195] Run: crio config
	I0729 13:14:40.476275  275704 cni.go:84] Creating CNI manager for ""
	I0729 13:14:40.476307  275704 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:14:40.476324  275704 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:14:40.476351  275704 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-695254 NodeName:test-preload-695254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:14:40.476518  275704 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-695254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:14:40.476596  275704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0729 13:14:40.486643  275704 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:14:40.486712  275704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:14:40.495929  275704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0729 13:14:40.512030  275704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:14:40.527831  275704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0729 13:14:40.543990  275704 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0729 13:14:40.547781  275704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:14:40.559757  275704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:14:40.671847  275704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:14:40.687441  275704 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254 for IP: 192.168.39.171
	I0729 13:14:40.687473  275704 certs.go:194] generating shared ca certs ...
	I0729 13:14:40.687495  275704 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:14:40.687710  275704 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:14:40.687776  275704 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:14:40.687798  275704 certs.go:256] generating profile certs ...
	I0729 13:14:40.687904  275704 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/client.key
	I0729 13:14:40.687997  275704 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/apiserver.key.d53fd093
	I0729 13:14:40.688047  275704 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/proxy-client.key
	I0729 13:14:40.688206  275704 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:14:40.688254  275704 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:14:40.688268  275704 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:14:40.688303  275704 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:14:40.688330  275704 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:14:40.688360  275704 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:14:40.688418  275704 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:14:40.689181  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:14:40.729923  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:14:40.762782  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:14:40.795775  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:14:40.830474  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 13:14:40.855001  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:14:40.880943  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:14:40.915440  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:14:40.938508  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:14:40.961201  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:14:40.983875  275704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:14:41.006358  275704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:14:41.022382  275704 ssh_runner.go:195] Run: openssl version
	I0729 13:14:41.028035  275704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:14:41.038443  275704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:14:41.042927  275704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:14:41.042967  275704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:14:41.048721  275704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:14:41.059143  275704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:14:41.069695  275704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:14:41.074046  275704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:14:41.074120  275704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:14:41.079626  275704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:14:41.090005  275704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:14:41.100303  275704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:14:41.104565  275704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:14:41.104614  275704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:14:41.110164  275704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:14:41.120104  275704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:14:41.124213  275704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:14:41.129940  275704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:14:41.135490  275704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:14:41.141086  275704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:14:41.146593  275704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:14:41.151928  275704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:14:41.157218  275704 kubeadm.go:392] StartCluster: {Name:test-preload-695254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-695254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:14:41.157301  275704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:14:41.157351  275704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:14:41.195110  275704 cri.go:89] found id: ""
	I0729 13:14:41.195180  275704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:14:41.205213  275704 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:14:41.205241  275704 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:14:41.205284  275704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:14:41.214734  275704 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:14:41.215167  275704 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-695254" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:14:41.215298  275704 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-695254" cluster setting kubeconfig missing "test-preload-695254" context setting]
	I0729 13:14:41.215555  275704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:14:41.216159  275704 kapi.go:59] client config for test-preload-695254: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/client.crt", KeyFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/client.key", CAFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 13:14:41.216771  275704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:14:41.225743  275704 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.171
	I0729 13:14:41.225775  275704 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:14:41.225787  275704 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:14:41.225861  275704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:14:41.261149  275704 cri.go:89] found id: ""
	I0729 13:14:41.261217  275704 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:14:41.277737  275704 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:14:41.287816  275704 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:14:41.287840  275704 kubeadm.go:157] found existing configuration files:
	
	I0729 13:14:41.287907  275704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:14:41.297221  275704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:14:41.297281  275704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:14:41.306685  275704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:14:41.316005  275704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:14:41.316055  275704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:14:41.325964  275704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:14:41.335335  275704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:14:41.335405  275704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:14:41.344880  275704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:14:41.354261  275704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:14:41.354320  275704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:14:41.363796  275704 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:14:41.373486  275704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:14:41.467124  275704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:14:42.222269  275704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:14:42.465994  275704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:14:42.529520  275704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:14:42.595098  275704 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:14:42.595186  275704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:14:43.095340  275704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:14:43.595940  275704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:14:43.643735  275704 api_server.go:72] duration metric: took 1.048636261s to wait for apiserver process to appear ...
	I0729 13:14:43.643763  275704 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:14:43.643800  275704 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0729 13:14:43.644317  275704 api_server.go:269] stopped: https://192.168.39.171:8443/healthz: Get "https://192.168.39.171:8443/healthz": dial tcp 192.168.39.171:8443: connect: connection refused
	I0729 13:14:44.143885  275704 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0729 13:14:47.435697  275704 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:14:47.435727  275704 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:14:47.435741  275704 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0729 13:14:47.452728  275704 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:14:47.452757  275704 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:14:47.644427  275704 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0729 13:14:47.649941  275704 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:14:47.649972  275704 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:14:48.144672  275704 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0729 13:14:48.153621  275704 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:14:48.153647  275704 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:14:48.643988  275704 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0729 13:14:48.650535  275704 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0729 13:14:48.657117  275704 api_server.go:141] control plane version: v1.24.4
	I0729 13:14:48.657142  275704 api_server.go:131] duration metric: took 5.013372692s to wait for apiserver health ...
	I0729 13:14:48.657151  275704 cni.go:84] Creating CNI manager for ""
	I0729 13:14:48.657159  275704 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:14:48.659110  275704 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:14:48.660467  275704 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:14:48.681372  275704 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:14:48.729412  275704 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:14:48.740261  275704 system_pods.go:59] 7 kube-system pods found
	I0729 13:14:48.740289  275704 system_pods.go:61] "coredns-6d4b75cb6d-qhzcq" [d072c450-99b9-4c62-9908-1d53ad7bedee] Running
	I0729 13:14:48.740293  275704 system_pods.go:61] "etcd-test-preload-695254" [89cc41cf-82eb-498a-b730-37fb4f089606] Running
	I0729 13:14:48.740299  275704 system_pods.go:61] "kube-apiserver-test-preload-695254" [6813cb1b-8fff-47a5-a891-66f8c6114d5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:14:48.740307  275704 system_pods.go:61] "kube-controller-manager-test-preload-695254" [3bcbca1f-f709-4bde-8940-cdf459805b50] Running
	I0729 13:14:48.740318  275704 system_pods.go:61] "kube-proxy-58nhz" [ded65fe7-9ab8-4776-a45a-56bbc9137725] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:14:48.740323  275704 system_pods.go:61] "kube-scheduler-test-preload-695254" [23aed4f1-cad4-4ca4-ab6d-6c989b6e87f6] Running
	I0729 13:14:48.740329  275704 system_pods.go:61] "storage-provisioner" [c4e3d0a7-9142-4407-a9db-e8b5e68ce450] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:14:48.740341  275704 system_pods.go:74] duration metric: took 10.905849ms to wait for pod list to return data ...
	I0729 13:14:48.740361  275704 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:14:48.743781  275704 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:14:48.743808  275704 node_conditions.go:123] node cpu capacity is 2
	I0729 13:14:48.743818  275704 node_conditions.go:105] duration metric: took 3.446902ms to run NodePressure ...
	I0729 13:14:48.743839  275704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:14:49.020429  275704 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:14:49.026378  275704 kubeadm.go:739] kubelet initialised
	I0729 13:14:49.026402  275704 kubeadm.go:740] duration metric: took 5.937753ms waiting for restarted kubelet to initialise ...
	I0729 13:14:49.026413  275704 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:14:49.032930  275704 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qhzcq" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:49.040510  275704 pod_ready.go:97] node "test-preload-695254" hosting pod "coredns-6d4b75cb6d-qhzcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.040542  275704 pod_ready.go:81] duration metric: took 7.581145ms for pod "coredns-6d4b75cb6d-qhzcq" in "kube-system" namespace to be "Ready" ...
	E0729 13:14:49.040553  275704 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-695254" hosting pod "coredns-6d4b75cb6d-qhzcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.040562  275704 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:49.049069  275704 pod_ready.go:97] node "test-preload-695254" hosting pod "etcd-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.049100  275704 pod_ready.go:81] duration metric: took 8.522334ms for pod "etcd-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	E0729 13:14:49.049111  275704 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-695254" hosting pod "etcd-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.049123  275704 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:49.054726  275704 pod_ready.go:97] node "test-preload-695254" hosting pod "kube-apiserver-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.054757  275704 pod_ready.go:81] duration metric: took 5.618838ms for pod "kube-apiserver-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	E0729 13:14:49.054772  275704 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-695254" hosting pod "kube-apiserver-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.054783  275704 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:49.132307  275704 pod_ready.go:97] node "test-preload-695254" hosting pod "kube-controller-manager-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.132333  275704 pod_ready.go:81] duration metric: took 77.536032ms for pod "kube-controller-manager-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	E0729 13:14:49.132343  275704 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-695254" hosting pod "kube-controller-manager-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.132349  275704 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-58nhz" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:49.533481  275704 pod_ready.go:97] node "test-preload-695254" hosting pod "kube-proxy-58nhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.533513  275704 pod_ready.go:81] duration metric: took 401.155846ms for pod "kube-proxy-58nhz" in "kube-system" namespace to be "Ready" ...
	E0729 13:14:49.533523  275704 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-695254" hosting pod "kube-proxy-58nhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.533530  275704 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:49.932781  275704 pod_ready.go:97] node "test-preload-695254" hosting pod "kube-scheduler-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.932832  275704 pod_ready.go:81] duration metric: took 399.293922ms for pod "kube-scheduler-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	E0729 13:14:49.932846  275704 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-695254" hosting pod "kube-scheduler-test-preload-695254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:49.932857  275704 pod_ready.go:38] duration metric: took 906.432152ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:14:49.932885  275704 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:14:49.945038  275704 ops.go:34] apiserver oom_adj: -16
	I0729 13:14:49.945062  275704 kubeadm.go:597] duration metric: took 8.739813897s to restartPrimaryControlPlane
	I0729 13:14:49.945071  275704 kubeadm.go:394] duration metric: took 8.787858268s to StartCluster
	I0729 13:14:49.945090  275704 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:14:49.945174  275704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:14:49.945888  275704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:14:49.946145  275704 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:14:49.946221  275704 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:14:49.946322  275704 addons.go:69] Setting storage-provisioner=true in profile "test-preload-695254"
	I0729 13:14:49.946355  275704 addons.go:234] Setting addon storage-provisioner=true in "test-preload-695254"
	I0729 13:14:49.946359  275704 config.go:182] Loaded profile config "test-preload-695254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 13:14:49.946356  275704 addons.go:69] Setting default-storageclass=true in profile "test-preload-695254"
	I0729 13:14:49.946399  275704 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-695254"
	W0729 13:14:49.946368  275704 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:14:49.946490  275704 host.go:66] Checking if "test-preload-695254" exists ...
	I0729 13:14:49.946779  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:14:49.946782  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:14:49.946822  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:14:49.946835  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:14:49.947921  275704 out.go:177] * Verifying Kubernetes components...
	I0729 13:14:49.949429  275704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:14:49.961911  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0729 13:14:49.962184  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0729 13:14:49.962422  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:14:49.962577  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:14:49.962978  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:14:49.962998  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:14:49.963089  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:14:49.963112  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:14:49.963302  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:14:49.963448  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:14:49.963483  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetState
	I0729 13:14:49.963992  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:14:49.964034  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:14:49.965660  275704 kapi.go:59] client config for test-preload-695254: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/client.crt", KeyFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/profiles/test-preload-695254/client.key", CAFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 13:14:49.965883  275704 addons.go:234] Setting addon default-storageclass=true in "test-preload-695254"
	W0729 13:14:49.965897  275704 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:14:49.965920  275704 host.go:66] Checking if "test-preload-695254" exists ...
	I0729 13:14:49.966143  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:14:49.966171  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:14:49.978609  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41503
	I0729 13:14:49.979091  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:14:49.979559  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:14:49.979583  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:14:49.979880  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:14:49.980039  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetState
	I0729 13:14:49.980590  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0729 13:14:49.980969  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:14:49.981451  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:14:49.981473  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:14:49.981805  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:14:49.981853  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:49.982423  275704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:14:49.982472  275704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:14:49.983775  275704 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:14:49.985153  275704 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:14:49.985179  275704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:14:49.985200  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:49.987727  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:49.988191  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:49.988221  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:49.988353  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:49.988514  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:49.988642  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:49.988819  275704 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa Username:docker}
	I0729 13:14:49.997665  275704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0729 13:14:49.998021  275704 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:14:49.998429  275704 main.go:141] libmachine: Using API Version  1
	I0729 13:14:49.998449  275704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:14:49.998770  275704 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:14:49.998976  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetState
	I0729 13:14:50.000345  275704 main.go:141] libmachine: (test-preload-695254) Calling .DriverName
	I0729 13:14:50.000545  275704 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:14:50.000561  275704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:14:50.000578  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHHostname
	I0729 13:14:50.003094  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:50.003427  275704 main.go:141] libmachine: (test-preload-695254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a0:3a", ip: ""} in network mk-test-preload-695254: {Iface:virbr1 ExpiryTime:2024-07-29 14:14:17 +0000 UTC Type:0 Mac:52:54:00:55:a0:3a Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-695254 Clientid:01:52:54:00:55:a0:3a}
	I0729 13:14:50.003454  275704 main.go:141] libmachine: (test-preload-695254) DBG | domain test-preload-695254 has defined IP address 192.168.39.171 and MAC address 52:54:00:55:a0:3a in network mk-test-preload-695254
	I0729 13:14:50.003570  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHPort
	I0729 13:14:50.003741  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHKeyPath
	I0729 13:14:50.003906  275704 main.go:141] libmachine: (test-preload-695254) Calling .GetSSHUsername
	I0729 13:14:50.004053  275704 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/test-preload-695254/id_rsa Username:docker}
	I0729 13:14:50.150050  275704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:14:50.171478  275704 node_ready.go:35] waiting up to 6m0s for node "test-preload-695254" to be "Ready" ...
	I0729 13:14:50.283607  275704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:14:50.292565  275704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:14:51.283837  275704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.000186771s)
	I0729 13:14:51.283892  275704 main.go:141] libmachine: Making call to close driver server
	I0729 13:14:51.283903  275704 main.go:141] libmachine: (test-preload-695254) Calling .Close
	I0729 13:14:51.283905  275704 main.go:141] libmachine: Making call to close driver server
	I0729 13:14:51.283921  275704 main.go:141] libmachine: (test-preload-695254) Calling .Close
	I0729 13:14:51.284187  275704 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:14:51.284207  275704 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:14:51.284216  275704 main.go:141] libmachine: Making call to close driver server
	I0729 13:14:51.284218  275704 main.go:141] libmachine: (test-preload-695254) DBG | Closing plugin on server side
	I0729 13:14:51.284230  275704 main.go:141] libmachine: (test-preload-695254) DBG | Closing plugin on server side
	I0729 13:14:51.284190  275704 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:14:51.284254  275704 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:14:51.284261  275704 main.go:141] libmachine: Making call to close driver server
	I0729 13:14:51.284268  275704 main.go:141] libmachine: (test-preload-695254) Calling .Close
	I0729 13:14:51.284244  275704 main.go:141] libmachine: (test-preload-695254) Calling .Close
	I0729 13:14:51.284481  275704 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:14:51.284491  275704 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:14:51.284503  275704 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:14:51.284526  275704 main.go:141] libmachine: (test-preload-695254) DBG | Closing plugin on server side
	I0729 13:14:51.284494  275704 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:14:51.292541  275704 main.go:141] libmachine: Making call to close driver server
	I0729 13:14:51.292558  275704 main.go:141] libmachine: (test-preload-695254) Calling .Close
	I0729 13:14:51.292847  275704 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:14:51.292867  275704 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:14:51.292883  275704 main.go:141] libmachine: (test-preload-695254) DBG | Closing plugin on server side
	I0729 13:14:51.295015  275704 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 13:14:51.296419  275704 addons.go:510] duration metric: took 1.350208349s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 13:14:52.175104  275704 node_ready.go:53] node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:54.675399  275704 node_ready.go:53] node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:56.675744  275704 node_ready.go:53] node "test-preload-695254" has status "Ready":"False"
	I0729 13:14:58.175436  275704 node_ready.go:49] node "test-preload-695254" has status "Ready":"True"
	I0729 13:14:58.175460  275704 node_ready.go:38] duration metric: took 8.003954345s for node "test-preload-695254" to be "Ready" ...
	I0729 13:14:58.175470  275704 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:14:58.179954  275704 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qhzcq" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:58.184429  275704 pod_ready.go:92] pod "coredns-6d4b75cb6d-qhzcq" in "kube-system" namespace has status "Ready":"True"
	I0729 13:14:58.184452  275704 pod_ready.go:81] duration metric: took 4.472171ms for pod "coredns-6d4b75cb6d-qhzcq" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:58.184462  275704 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:58.188642  275704 pod_ready.go:92] pod "etcd-test-preload-695254" in "kube-system" namespace has status "Ready":"True"
	I0729 13:14:58.188662  275704 pod_ready.go:81] duration metric: took 4.194697ms for pod "etcd-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:14:58.188674  275704 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.196255  275704 pod_ready.go:102] pod "kube-apiserver-test-preload-695254" in "kube-system" namespace has status "Ready":"False"
	I0729 13:15:00.697173  275704 pod_ready.go:92] pod "kube-apiserver-test-preload-695254" in "kube-system" namespace has status "Ready":"True"
	I0729 13:15:00.697199  275704 pod_ready.go:81] duration metric: took 2.508516924s for pod "kube-apiserver-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.697218  275704 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.716700  275704 pod_ready.go:92] pod "kube-controller-manager-test-preload-695254" in "kube-system" namespace has status "Ready":"True"
	I0729 13:15:00.716726  275704 pod_ready.go:81] duration metric: took 19.500602ms for pod "kube-controller-manager-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.716735  275704 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-58nhz" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.727438  275704 pod_ready.go:92] pod "kube-proxy-58nhz" in "kube-system" namespace has status "Ready":"True"
	I0729 13:15:00.727460  275704 pod_ready.go:81] duration metric: took 10.719054ms for pod "kube-proxy-58nhz" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.727470  275704 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.975618  275704 pod_ready.go:92] pod "kube-scheduler-test-preload-695254" in "kube-system" namespace has status "Ready":"True"
	I0729 13:15:00.975645  275704 pod_ready.go:81] duration metric: took 248.168997ms for pod "kube-scheduler-test-preload-695254" in "kube-system" namespace to be "Ready" ...
	I0729 13:15:00.975656  275704 pod_ready.go:38] duration metric: took 2.800177333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:15:00.975671  275704 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:15:00.975718  275704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:15:00.990462  275704 api_server.go:72] duration metric: took 11.04428006s to wait for apiserver process to appear ...
	I0729 13:15:00.990487  275704 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:15:00.990507  275704 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0729 13:15:00.995975  275704 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0729 13:15:00.996972  275704 api_server.go:141] control plane version: v1.24.4
	I0729 13:15:00.996995  275704 api_server.go:131] duration metric: took 6.500909ms to wait for apiserver health ...
	I0729 13:15:00.997005  275704 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:15:01.179356  275704 system_pods.go:59] 7 kube-system pods found
	I0729 13:15:01.179392  275704 system_pods.go:61] "coredns-6d4b75cb6d-qhzcq" [d072c450-99b9-4c62-9908-1d53ad7bedee] Running
	I0729 13:15:01.179400  275704 system_pods.go:61] "etcd-test-preload-695254" [89cc41cf-82eb-498a-b730-37fb4f089606] Running
	I0729 13:15:01.179406  275704 system_pods.go:61] "kube-apiserver-test-preload-695254" [6813cb1b-8fff-47a5-a891-66f8c6114d5c] Running
	I0729 13:15:01.179412  275704 system_pods.go:61] "kube-controller-manager-test-preload-695254" [3bcbca1f-f709-4bde-8940-cdf459805b50] Running
	I0729 13:15:01.179416  275704 system_pods.go:61] "kube-proxy-58nhz" [ded65fe7-9ab8-4776-a45a-56bbc9137725] Running
	I0729 13:15:01.179422  275704 system_pods.go:61] "kube-scheduler-test-preload-695254" [23aed4f1-cad4-4ca4-ab6d-6c989b6e87f6] Running
	I0729 13:15:01.179429  275704 system_pods.go:61] "storage-provisioner" [c4e3d0a7-9142-4407-a9db-e8b5e68ce450] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:15:01.179489  275704 system_pods.go:74] duration metric: took 182.421452ms to wait for pod list to return data ...
	I0729 13:15:01.179507  275704 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:15:01.375492  275704 default_sa.go:45] found service account: "default"
	I0729 13:15:01.375525  275704 default_sa.go:55] duration metric: took 196.01046ms for default service account to be created ...
	I0729 13:15:01.375538  275704 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:15:01.578989  275704 system_pods.go:86] 7 kube-system pods found
	I0729 13:15:01.579018  275704 system_pods.go:89] "coredns-6d4b75cb6d-qhzcq" [d072c450-99b9-4c62-9908-1d53ad7bedee] Running
	I0729 13:15:01.579023  275704 system_pods.go:89] "etcd-test-preload-695254" [89cc41cf-82eb-498a-b730-37fb4f089606] Running
	I0729 13:15:01.579028  275704 system_pods.go:89] "kube-apiserver-test-preload-695254" [6813cb1b-8fff-47a5-a891-66f8c6114d5c] Running
	I0729 13:15:01.579032  275704 system_pods.go:89] "kube-controller-manager-test-preload-695254" [3bcbca1f-f709-4bde-8940-cdf459805b50] Running
	I0729 13:15:01.579036  275704 system_pods.go:89] "kube-proxy-58nhz" [ded65fe7-9ab8-4776-a45a-56bbc9137725] Running
	I0729 13:15:01.579040  275704 system_pods.go:89] "kube-scheduler-test-preload-695254" [23aed4f1-cad4-4ca4-ab6d-6c989b6e87f6] Running
	I0729 13:15:01.579048  275704 system_pods.go:89] "storage-provisioner" [c4e3d0a7-9142-4407-a9db-e8b5e68ce450] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:15:01.579056  275704 system_pods.go:126] duration metric: took 203.51044ms to wait for k8s-apps to be running ...
	I0729 13:15:01.579066  275704 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:15:01.579110  275704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:15:01.593460  275704 system_svc.go:56] duration metric: took 14.384112ms WaitForService to wait for kubelet
	I0729 13:15:01.593489  275704 kubeadm.go:582] duration metric: took 11.64731343s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:15:01.593510  275704 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:15:01.776138  275704 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:15:01.776163  275704 node_conditions.go:123] node cpu capacity is 2
	I0729 13:15:01.776174  275704 node_conditions.go:105] duration metric: took 182.660492ms to run NodePressure ...
	I0729 13:15:01.776184  275704 start.go:241] waiting for startup goroutines ...
	I0729 13:15:01.776191  275704 start.go:246] waiting for cluster config update ...
	I0729 13:15:01.776203  275704 start.go:255] writing updated cluster config ...
	I0729 13:15:01.776490  275704 ssh_runner.go:195] Run: rm -f paused
	I0729 13:15:01.825620  275704 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0729 13:15:01.827423  275704 out.go:177] 
	W0729 13:15:01.828579  275704 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0729 13:15:01.829837  275704 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0729 13:15:01.831136  275704 out.go:177] * Done! kubectl is now configured to use "test-preload-695254" cluster and "default" namespace by default
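	For reference, the healthz retries earlier in this log correspond to minikube polling https://192.168.39.171:8443/healthz until the apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) complete and the endpoint returns 200. Below is a minimal, illustrative Go sketch of such a polling loop; it is not minikube's actual api_server.go implementation. The host URL is taken from the log above, and the InsecureSkipVerify transport is an assumption made to keep the sketch self-contained (minikube authenticates with the profile's client certificate and CA instead).

	// healthzpoll is a hypothetical sketch, not minikube code: poll an apiserver
	// /healthz endpoint until it returns HTTP 200 or a deadline expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Assumption for the sketch: skip TLS verification rather than
			// loading the cluster CA and client certificates.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Printf("healthz request failed: %v, retrying\n", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.171:8443/healthz", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	The same verbose check breakdown seen in the log can also be requested manually with kubectl get --raw '/healthz?verbose' against the cluster.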
	
	
	==> CRI-O <==
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.736570551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ea211cd-c933-448e-b43d-48f082566682 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.738057123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f1d1122-500c-4b26-8a4c-96605c123526 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.738553293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258902738503248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f1d1122-500c-4b26-8a4c-96605c123526 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.739228334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb2dade0-f3ee-4948-9293-beded75fb33a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.739275299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb2dade0-f3ee-4948-9293-beded75fb33a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.739460930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d45cb52d39418ee6fb3399a06318dbbebc4fa313ac0389049f73ad495c082d75,PodSandboxId:215b0ddb6c4cf1a7b2563ad1f8a88812a09aa9b037cdfc8ce1b36968f8e4b8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722258895720931823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qhzcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d072c450-99b9-4c62-9908-1d53ad7bedee,},Annotations:map[string]string{io.kubernetes.container.hash: 3807eee4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d809941d06b49bba35a366672f3eaead8d770a037f6430854872a263dd5608a,PodSandboxId:8b314c76d5297a9bae0ca50ebd1b1bb4e28eb8a392c66ea77edaabfb07d25f1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722258888758516257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: c4e3d0a7-9142-4407-a9db-e8b5e68ce450,},Annotations:map[string]string{io.kubernetes.container.hash: e7c5f20f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b3e0d12dd6e14bbcad5812dd2782bbf7397a366bceaba65ecc3b36beaa010c,PodSandboxId:988b7851890cb09458b891e49757879e744faa8e27a0ded65bf1f21c6509dbca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722258888603372369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58nhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded
65fe7-9ab8-4776-a45a-56bbc9137725,},Annotations:map[string]string{io.kubernetes.container.hash: 282dba8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e846b59fb2cd0227a53724825d2fc14c2de04d507b71e271ce36f600d285cee6,PodSandboxId:a47826487b0222eb07d292f391906a6edfbbbd6b93a621541924a88d7555ac6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722258883410195513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7e7c32d958f5ddb206866fbb6c2d999,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab82d1de70b5ee8d2d451eef2c8ed531f543f76b1862fdfea6f641bb569f2f2,PodSandboxId:8100a202d42ffa00681e0ecb7865bd0058ce7778e88274136a4b150f4a157af1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722258883382951931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297cb0233f5c46930b950701
71b3c92a,},Annotations:map[string]string{io.kubernetes.container.hash: 8b10b680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250d1d4637b81b2a2d13f9f6a46aff25943cac73dae8a61937479d5715692c25,PodSandboxId:028ba35c1b91b7ba2aab4930505197b808a7b29cd0ac5c51757c8fface9ef09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722258883304701371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d840d7273a077591f90e03ca2c0a7f,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db44b873602d7c4620d664776c5496b40374f9dc74cfe789e10fbff1f7ac5b8,PodSandboxId:0b6f53f16037282fa8f8201fb79cf737398a46e3d98ae5c7ca82ec5a1f72245a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722258883318085669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df79b514a1c8b85f60110e5d2ea3c2d,},Annotations
:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb2dade0-f3ee-4948-9293-beded75fb33a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.780167480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc356fb1-63e3-49ad-8ea5-85781c364cc0 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.780251107Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc356fb1-63e3-49ad-8ea5-85781c364cc0 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.782108717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc005f9a-7619-4a90-b541-82cde46fb346 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.782579395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258902782557330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc005f9a-7619-4a90-b541-82cde46fb346 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.783073588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48d31508-dacf-4003-8cd4-c426858c4f40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.783144467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48d31508-dacf-4003-8cd4-c426858c4f40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.783312072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d45cb52d39418ee6fb3399a06318dbbebc4fa313ac0389049f73ad495c082d75,PodSandboxId:215b0ddb6c4cf1a7b2563ad1f8a88812a09aa9b037cdfc8ce1b36968f8e4b8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722258895720931823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qhzcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d072c450-99b9-4c62-9908-1d53ad7bedee,},Annotations:map[string]string{io.kubernetes.container.hash: 3807eee4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d809941d06b49bba35a366672f3eaead8d770a037f6430854872a263dd5608a,PodSandboxId:8b314c76d5297a9bae0ca50ebd1b1bb4e28eb8a392c66ea77edaabfb07d25f1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722258888758516257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: c4e3d0a7-9142-4407-a9db-e8b5e68ce450,},Annotations:map[string]string{io.kubernetes.container.hash: e7c5f20f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b3e0d12dd6e14bbcad5812dd2782bbf7397a366bceaba65ecc3b36beaa010c,PodSandboxId:988b7851890cb09458b891e49757879e744faa8e27a0ded65bf1f21c6509dbca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722258888603372369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58nhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded
65fe7-9ab8-4776-a45a-56bbc9137725,},Annotations:map[string]string{io.kubernetes.container.hash: 282dba8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e846b59fb2cd0227a53724825d2fc14c2de04d507b71e271ce36f600d285cee6,PodSandboxId:a47826487b0222eb07d292f391906a6edfbbbd6b93a621541924a88d7555ac6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722258883410195513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7e7c32d958f5ddb206866fbb6c2d999,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab82d1de70b5ee8d2d451eef2c8ed531f543f76b1862fdfea6f641bb569f2f2,PodSandboxId:8100a202d42ffa00681e0ecb7865bd0058ce7778e88274136a4b150f4a157af1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722258883382951931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297cb0233f5c46930b950701
71b3c92a,},Annotations:map[string]string{io.kubernetes.container.hash: 8b10b680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250d1d4637b81b2a2d13f9f6a46aff25943cac73dae8a61937479d5715692c25,PodSandboxId:028ba35c1b91b7ba2aab4930505197b808a7b29cd0ac5c51757c8fface9ef09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722258883304701371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d840d7273a077591f90e03ca2c0a7f,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db44b873602d7c4620d664776c5496b40374f9dc74cfe789e10fbff1f7ac5b8,PodSandboxId:0b6f53f16037282fa8f8201fb79cf737398a46e3d98ae5c7ca82ec5a1f72245a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722258883318085669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df79b514a1c8b85f60110e5d2ea3c2d,},Annotations
:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48d31508-dacf-4003-8cd4-c426858c4f40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.795414896Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ec0d2568-0e25-489d-aef4-5864f9d785e4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.795608783Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:215b0ddb6c4cf1a7b2563ad1f8a88812a09aa9b037cdfc8ce1b36968f8e4b8a1,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-qhzcq,Uid:d072c450-99b9-4c62-9908-1d53ad7bedee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722258895503173815,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-qhzcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d072c450-99b9-4c62-9908-1d53ad7bedee,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:14:47.578517809Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:988b7851890cb09458b891e49757879e744faa8e27a0ded65bf1f21c6509dbca,Metadata:&PodSandboxMetadata{Name:kube-proxy-58nhz,Uid:ded65fe7-9ab8-4776-a45a-56bbc9137725,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1722258888488941078,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-58nhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded65fe7-9ab8-4776-a45a-56bbc9137725,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:14:47.578535287Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b314c76d5297a9bae0ca50ebd1b1bb4e28eb8a392c66ea77edaabfb07d25f1c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c4e3d0a7-9142-4407-a9db-e8b5e68ce450,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722258888194605454,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e3d0a7-9142-4407-a9db-e8b5
e68ce450,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T13:14:47.578537477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:028ba35c1b91b7ba2aab4930505197b808a7b29cd0ac5c51757c8fface9ef09f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-695254,Uid:29d840d
7273a077591f90e03ca2c0a7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722258883134966033,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d840d7273a077591f90e03ca2c0a7f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29d840d7273a077591f90e03ca2c0a7f,kubernetes.io/config.seen: 2024-07-29T13:14:42.590615833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8100a202d42ffa00681e0ecb7865bd0058ce7778e88274136a4b150f4a157af1,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-695254,Uid:297cb0233f5c46930b95070171b3c92a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722258883132253875,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
97cb0233f5c46930b95070171b3c92a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.171:2379,kubernetes.io/config.hash: 297cb0233f5c46930b95070171b3c92a,kubernetes.io/config.seen: 2024-07-29T13:14:42.591059521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b6f53f16037282fa8f8201fb79cf737398a46e3d98ae5c7ca82ec5a1f72245a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-695254,Uid:6df79b514a1c8b85f60110e5d2ea3c2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722258883130974646,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df79b514a1c8b85f60110e5d2ea3c2d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.171:8443,kubernetes.io/config.hash: 6df79b514a1c8b8
5f60110e5d2ea3c2d,kubernetes.io/config.seen: 2024-07-29T13:14:42.590578175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a47826487b0222eb07d292f391906a6edfbbbd6b93a621541924a88d7555ac6a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-695254,Uid:f7e7c32d958f5ddb206866fbb6c2d999,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722258883129711546,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7e7c32d958f5ddb206866fbb6c2d999,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f7e7c32d958f5ddb206866fbb6c2d999,kubernetes.io/config.seen: 2024-07-29T13:14:42.590614679Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ec0d2568-0e25-489d-aef4-5864f9d785e4 name=/runtime.v1.RuntimeService/ListPodSandbox

                                                
                                                
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.796459609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b9248f9-b0ad-4ba8-aeac-0a2b04a76e9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.796577329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b9248f9-b0ad-4ba8-aeac-0a2b04a76e9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.796969346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d45cb52d39418ee6fb3399a06318dbbebc4fa313ac0389049f73ad495c082d75,PodSandboxId:215b0ddb6c4cf1a7b2563ad1f8a88812a09aa9b037cdfc8ce1b36968f8e4b8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722258895720931823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qhzcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d072c450-99b9-4c62-9908-1d53ad7bedee,},Annotations:map[string]string{io.kubernetes.container.hash: 3807eee4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d809941d06b49bba35a366672f3eaead8d770a037f6430854872a263dd5608a,PodSandboxId:8b314c76d5297a9bae0ca50ebd1b1bb4e28eb8a392c66ea77edaabfb07d25f1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722258888758516257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: c4e3d0a7-9142-4407-a9db-e8b5e68ce450,},Annotations:map[string]string{io.kubernetes.container.hash: e7c5f20f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b3e0d12dd6e14bbcad5812dd2782bbf7397a366bceaba65ecc3b36beaa010c,PodSandboxId:988b7851890cb09458b891e49757879e744faa8e27a0ded65bf1f21c6509dbca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722258888603372369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58nhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded
65fe7-9ab8-4776-a45a-56bbc9137725,},Annotations:map[string]string{io.kubernetes.container.hash: 282dba8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e846b59fb2cd0227a53724825d2fc14c2de04d507b71e271ce36f600d285cee6,PodSandboxId:a47826487b0222eb07d292f391906a6edfbbbd6b93a621541924a88d7555ac6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722258883410195513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7e7c32d958f5ddb206866fbb6c2d999,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab82d1de70b5ee8d2d451eef2c8ed531f543f76b1862fdfea6f641bb569f2f2,PodSandboxId:8100a202d42ffa00681e0ecb7865bd0058ce7778e88274136a4b150f4a157af1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722258883382951931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297cb0233f5c46930b950701
71b3c92a,},Annotations:map[string]string{io.kubernetes.container.hash: 8b10b680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250d1d4637b81b2a2d13f9f6a46aff25943cac73dae8a61937479d5715692c25,PodSandboxId:028ba35c1b91b7ba2aab4930505197b808a7b29cd0ac5c51757c8fface9ef09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722258883304701371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d840d7273a077591f90e03ca2c0a7f,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db44b873602d7c4620d664776c5496b40374f9dc74cfe789e10fbff1f7ac5b8,PodSandboxId:0b6f53f16037282fa8f8201fb79cf737398a46e3d98ae5c7ca82ec5a1f72245a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722258883318085669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df79b514a1c8b85f60110e5d2ea3c2d,},Annotations
:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b9248f9-b0ad-4ba8-aeac-0a2b04a76e9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.821599643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26e6a2af-1959-44ae-83a4-89018c074ccd name=/runtime.v1.RuntimeService/Version
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.821680550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26e6a2af-1959-44ae-83a4-89018c074ccd name=/runtime.v1.RuntimeService/Version
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.823021082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f8dd202-fe54-425a-a805-7315dd32b6ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.823430584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722258902823412188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f8dd202-fe54-425a-a805-7315dd32b6ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.823998309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=578176a9-619d-4c12-934d-ce676ed2af87 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.824064406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=578176a9-619d-4c12-934d-ce676ed2af87 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:15:02 test-preload-695254 crio[688]: time="2024-07-29 13:15:02.824227251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d45cb52d39418ee6fb3399a06318dbbebc4fa313ac0389049f73ad495c082d75,PodSandboxId:215b0ddb6c4cf1a7b2563ad1f8a88812a09aa9b037cdfc8ce1b36968f8e4b8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722258895720931823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qhzcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d072c450-99b9-4c62-9908-1d53ad7bedee,},Annotations:map[string]string{io.kubernetes.container.hash: 3807eee4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d809941d06b49bba35a366672f3eaead8d770a037f6430854872a263dd5608a,PodSandboxId:8b314c76d5297a9bae0ca50ebd1b1bb4e28eb8a392c66ea77edaabfb07d25f1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722258888758516257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: c4e3d0a7-9142-4407-a9db-e8b5e68ce450,},Annotations:map[string]string{io.kubernetes.container.hash: e7c5f20f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b3e0d12dd6e14bbcad5812dd2782bbf7397a366bceaba65ecc3b36beaa010c,PodSandboxId:988b7851890cb09458b891e49757879e744faa8e27a0ded65bf1f21c6509dbca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722258888603372369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58nhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded
65fe7-9ab8-4776-a45a-56bbc9137725,},Annotations:map[string]string{io.kubernetes.container.hash: 282dba8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e846b59fb2cd0227a53724825d2fc14c2de04d507b71e271ce36f600d285cee6,PodSandboxId:a47826487b0222eb07d292f391906a6edfbbbd6b93a621541924a88d7555ac6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722258883410195513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7e7c32d958f5ddb206866fbb6c2d999,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab82d1de70b5ee8d2d451eef2c8ed531f543f76b1862fdfea6f641bb569f2f2,PodSandboxId:8100a202d42ffa00681e0ecb7865bd0058ce7778e88274136a4b150f4a157af1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722258883382951931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297cb0233f5c46930b950701
71b3c92a,},Annotations:map[string]string{io.kubernetes.container.hash: 8b10b680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250d1d4637b81b2a2d13f9f6a46aff25943cac73dae8a61937479d5715692c25,PodSandboxId:028ba35c1b91b7ba2aab4930505197b808a7b29cd0ac5c51757c8fface9ef09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722258883304701371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d840d7273a077591f90e03ca2c0a7f,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db44b873602d7c4620d664776c5496b40374f9dc74cfe789e10fbff1f7ac5b8,PodSandboxId:0b6f53f16037282fa8f8201fb79cf737398a46e3d98ae5c7ca82ec5a1f72245a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722258883318085669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-695254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df79b514a1c8b85f60110e5d2ea3c2d,},Annotations
:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=578176a9-619d-4c12-934d-ce676ed2af87 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d45cb52d39418       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   215b0ddb6c4cf       coredns-6d4b75cb6d-qhzcq
	7d809941d06b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   8b314c76d5297       storage-provisioner
	d8b3e0d12dd6e       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   988b7851890cb       kube-proxy-58nhz
	e846b59fb2cd0       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   a47826487b022       kube-controller-manager-test-preload-695254
	eab82d1de70b5       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   8100a202d42ff       etcd-test-preload-695254
	3db44b873602d       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   0b6f53f160372       kube-apiserver-test-preload-695254
	250d1d4637b81       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   028ba35c1b91b       kube-scheduler-test-preload-695254
	
	
	==> coredns [d45cb52d39418ee6fb3399a06318dbbebc4fa313ac0389049f73ad495c082d75] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:51888 - 49336 "HINFO IN 8980476770089838207.880008994336354335. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007307932s
	
	
	==> describe nodes <==
	Name:               test-preload-695254
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-695254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=test-preload-695254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_13_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:13:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-695254
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:14:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:14:57 +0000   Mon, 29 Jul 2024 13:13:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:14:57 +0000   Mon, 29 Jul 2024 13:13:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:14:57 +0000   Mon, 29 Jul 2024 13:13:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:14:57 +0000   Mon, 29 Jul 2024 13:14:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    test-preload-695254
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ec6c7f7ec994be4b3eca51cf4bc9c72
	  System UUID:                2ec6c7f7-ec99-4be4-b3ec-a51cf4bc9c72
	  Boot ID:                    7c70a1db-6c39-4cc0-a78a-174b202fd5e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-qhzcq                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     101s
	  kube-system                 etcd-test-preload-695254                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         115s
	  kube-system                 kube-apiserver-test-preload-695254             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-test-preload-695254    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-58nhz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-test-preload-695254             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s (x5 over 2m2s)  kubelet          Node test-preload-695254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x4 over 2m2s)  kubelet          Node test-preload-695254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x4 over 2m2s)  kubelet          Node test-preload-695254 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node test-preload-695254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node test-preload-695254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node test-preload-695254 status is now: NodeHasSufficientPID
	  Normal  NodeReady                105s                 kubelet          Node test-preload-695254 status is now: NodeReady
	  Normal  RegisteredNode           102s                 node-controller  Node test-preload-695254 event: Registered Node test-preload-695254 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-695254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-695254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-695254 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-695254 event: Registered Node test-preload-695254 in Controller
	
	
	==> dmesg <==
	[Jul29 13:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050044] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.753205] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.522091] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.574218] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.097569] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.066827] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061587] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.168870] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.136812] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.287422] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[ +13.131847] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.054125] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.724697] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +5.576664] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.085366] systemd-fstab-generator[1756]: Ignoring "noauto" option for root device
	[  +5.476516] kauditd_printk_skb: 59 callbacks suppressed
	[Jul29 13:15] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [eab82d1de70b5ee8d2d451eef2c8ed531f543f76b1862fdfea6f641bb569f2f2] <==
	{"level":"info","ts":"2024-07-29T13:14:43.784Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"4e6b9cdcc1ed933f","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T13:14:43.796Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T13:14:43.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f switched to configuration voters=(5650782629426729791)"}
	{"level":"info","ts":"2024-07-29T13:14:43.797Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","added-peer-id":"4e6b9cdcc1ed933f","added-peer-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2024-07-29T13:14:43.797Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:14:43.797Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:14:43.803Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:14:43.806Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-07-29T13:14:43.808Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-07-29T13:14:43.809Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4e6b9cdcc1ed933f","initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:14:43.811Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:14:44.750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:14:44.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:14:44.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 2"}
	{"level":"info","ts":"2024-07-29T13:14:44.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:14:44.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-07-29T13:14:44.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:14:44.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-07-29T13:14:44.752Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:test-preload-695254 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:14:44.752Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:14:44.756Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:14:44.756Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:14:44.758Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2024-07-29T13:14:44.764Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:14:44.764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:15:03 up 0 min,  0 users,  load average: 0.57, 0.17, 0.06
	Linux test-preload-695254 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3db44b873602d7c4620d664776c5496b40374f9dc74cfe789e10fbff1f7ac5b8] <==
	I0729 13:14:47.395552       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 13:14:47.395565       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 13:14:47.425538       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0729 13:14:47.425564       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0729 13:14:47.425606       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 13:14:47.440851       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 13:14:47.482030       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 13:14:47.483989       1 cache.go:39] Caches are synced for autoregister controller
	E0729 13:14:47.491635       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0729 13:14:47.495472       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 13:14:47.512862       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:14:47.513659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:14:47.513700       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 13:14:47.526337       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 13:14:47.574734       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:14:48.037756       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 13:14:48.390324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 13:14:48.902945       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 13:14:48.915597       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 13:14:48.971601       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 13:14:48.994863       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 13:14:49.002626       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 13:14:49.095145       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0729 13:15:00.519637       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 13:15:00.671814       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e846b59fb2cd0227a53724825d2fc14c2de04d507b71e271ce36f600d285cee6] <==
	I0729 13:15:00.555706       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 13:15:00.555974       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0729 13:15:00.557174       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 13:15:00.559487       1 shared_informer.go:262] Caches are synced for taint
	I0729 13:15:00.559677       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 13:15:00.559854       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-695254. Assuming now as a timestamp.
	I0729 13:15:00.559940       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 13:15:00.560047       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 13:15:00.560147       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 13:15:00.563039       1 event.go:294] "Event occurred" object="test-preload-695254" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-695254 event: Registered Node test-preload-695254 in Controller"
	I0729 13:15:00.568542       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 13:15:00.570936       1 shared_informer.go:262] Caches are synced for ephemeral
	I0729 13:15:00.591655       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 13:15:00.605666       1 shared_informer.go:262] Caches are synced for crt configmap
	I0729 13:15:00.633107       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0729 13:15:00.659004       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0729 13:15:00.703197       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0729 13:15:00.708107       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0729 13:15:00.742871       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 13:15:00.766247       1 shared_informer.go:262] Caches are synced for cronjob
	I0729 13:15:00.771740       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 13:15:00.781746       1 shared_informer.go:262] Caches are synced for job
	I0729 13:15:01.176494       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 13:15:01.178854       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 13:15:01.178923       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [d8b3e0d12dd6e14bbcad5812dd2782bbf7397a366bceaba65ecc3b36beaa010c] <==
	I0729 13:14:49.040132       1 node.go:163] Successfully retrieved node IP: 192.168.39.171
	I0729 13:14:49.040900       1 server_others.go:138] "Detected node IP" address="192.168.39.171"
	I0729 13:14:49.041143       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 13:14:49.089564       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 13:14:49.089581       1 server_others.go:206] "Using iptables Proxier"
	I0729 13:14:49.089618       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 13:14:49.089961       1 server.go:661] "Version info" version="v1.24.4"
	I0729 13:14:49.089985       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:14:49.091146       1 config.go:317] "Starting service config controller"
	I0729 13:14:49.091304       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 13:14:49.091366       1 config.go:226] "Starting endpoint slice config controller"
	I0729 13:14:49.091386       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 13:14:49.092357       1 config.go:444] "Starting node config controller"
	I0729 13:14:49.092384       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 13:14:49.192885       1 shared_informer.go:262] Caches are synced for node config
	I0729 13:14:49.192985       1 shared_informer.go:262] Caches are synced for service config
	I0729 13:14:49.193022       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [250d1d4637b81b2a2d13f9f6a46aff25943cac73dae8a61937479d5715692c25] <==
	I0729 13:14:43.915603       1 serving.go:348] Generated self-signed cert in-memory
	I0729 13:14:47.522954       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0729 13:14:47.524519       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:14:47.530869       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0729 13:14:47.531233       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0729 13:14:47.531446       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 13:14:47.531470       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:14:47.531554       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0729 13:14:47.531576       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 13:14:47.532556       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0729 13:14:47.532860       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 13:14:47.632442       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 13:14:47.632523       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0729 13:14:47.632618       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648169    1076 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ded65fe7-9ab8-4776-a45a-56bbc9137725-lib-modules\") pod \"kube-proxy-58nhz\" (UID: \"ded65fe7-9ab8-4776-a45a-56bbc9137725\") " pod="kube-system/kube-proxy-58nhz"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648531    1076 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln69l\" (UniqueName: \"kubernetes.io/projected/c4e3d0a7-9142-4407-a9db-e8b5e68ce450-kube-api-access-ln69l\") pod \"storage-provisioner\" (UID: \"c4e3d0a7-9142-4407-a9db-e8b5e68ce450\") " pod="kube-system/storage-provisioner"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648581    1076 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume\") pod \"coredns-6d4b75cb6d-qhzcq\" (UID: \"d072c450-99b9-4c62-9908-1d53ad7bedee\") " pod="kube-system/coredns-6d4b75cb6d-qhzcq"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648620    1076 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvts\" (UniqueName: \"kubernetes.io/projected/ded65fe7-9ab8-4776-a45a-56bbc9137725-kube-api-access-6lvts\") pod \"kube-proxy-58nhz\" (UID: \"ded65fe7-9ab8-4776-a45a-56bbc9137725\") " pod="kube-system/kube-proxy-58nhz"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648656    1076 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm24w\" (UniqueName: \"kubernetes.io/projected/d072c450-99b9-4c62-9908-1d53ad7bedee-kube-api-access-tm24w\") pod \"coredns-6d4b75cb6d-qhzcq\" (UID: \"d072c450-99b9-4c62-9908-1d53ad7bedee\") " pod="kube-system/coredns-6d4b75cb6d-qhzcq"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648676    1076 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ded65fe7-9ab8-4776-a45a-56bbc9137725-xtables-lock\") pod \"kube-proxy-58nhz\" (UID: \"ded65fe7-9ab8-4776-a45a-56bbc9137725\") " pod="kube-system/kube-proxy-58nhz"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648693    1076 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c4e3d0a7-9142-4407-a9db-e8b5e68ce450-tmp\") pod \"storage-provisioner\" (UID: \"c4e3d0a7-9142-4407-a9db-e8b5e68ce450\") " pod="kube-system/storage-provisioner"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: I0729 13:14:47.648709    1076 reconciler.go:159] "Reconciler: start to sync state"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: E0729 13:14:47.663434    1076 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: E0729 13:14:47.752207    1076 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 13:14:47 test-preload-695254 kubelet[1076]: E0729 13:14:47.752294    1076 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume podName:d072c450-99b9-4c62-9908-1d53ad7bedee nodeName:}" failed. No retries permitted until 2024-07-29 13:14:48.252269862 +0000 UTC m=+5.794007578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume") pod "coredns-6d4b75cb6d-qhzcq" (UID: "d072c450-99b9-4c62-9908-1d53ad7bedee") : object "kube-system"/"coredns" not registered
	Jul 29 13:14:48 test-preload-695254 kubelet[1076]: E0729 13:14:48.254725    1076 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 13:14:48 test-preload-695254 kubelet[1076]: E0729 13:14:48.254829    1076 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume podName:d072c450-99b9-4c62-9908-1d53ad7bedee nodeName:}" failed. No retries permitted until 2024-07-29 13:14:49.25481393 +0000 UTC m=+6.796551647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume") pod "coredns-6d4b75cb6d-qhzcq" (UID: "d072c450-99b9-4c62-9908-1d53ad7bedee") : object "kube-system"/"coredns" not registered
	Jul 29 13:14:48 test-preload-695254 kubelet[1076]: I0729 13:14:48.736954    1076 scope.go:110] "RemoveContainer" containerID="e23ab1ebae7b42141bfe413f4c5e403ec9547491590fa7910505d884c88a7d1b"
	Jul 29 13:14:49 test-preload-695254 kubelet[1076]: E0729 13:14:49.266050    1076 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 13:14:49 test-preload-695254 kubelet[1076]: E0729 13:14:49.266127    1076 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume podName:d072c450-99b9-4c62-9908-1d53ad7bedee nodeName:}" failed. No retries permitted until 2024-07-29 13:14:51.266111327 +0000 UTC m=+8.807849043 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume") pod "coredns-6d4b75cb6d-qhzcq" (UID: "d072c450-99b9-4c62-9908-1d53ad7bedee") : object "kube-system"/"coredns" not registered
	Jul 29 13:14:49 test-preload-695254 kubelet[1076]: E0729 13:14:49.694481    1076 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qhzcq" podUID=d072c450-99b9-4c62-9908-1d53ad7bedee
	Jul 29 13:14:49 test-preload-695254 kubelet[1076]: I0729 13:14:49.751399    1076 scope.go:110] "RemoveContainer" containerID="7d809941d06b49bba35a366672f3eaead8d770a037f6430854872a263dd5608a"
	Jul 29 13:14:49 test-preload-695254 kubelet[1076]: E0729 13:14:49.751554    1076 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c4e3d0a7-9142-4407-a9db-e8b5e68ce450)\"" pod="kube-system/storage-provisioner" podUID=c4e3d0a7-9142-4407-a9db-e8b5e68ce450
	Jul 29 13:14:49 test-preload-695254 kubelet[1076]: I0729 13:14:49.751613    1076 scope.go:110] "RemoveContainer" containerID="e23ab1ebae7b42141bfe413f4c5e403ec9547491590fa7910505d884c88a7d1b"
	Jul 29 13:14:50 test-preload-695254 kubelet[1076]: I0729 13:14:50.756387    1076 scope.go:110] "RemoveContainer" containerID="7d809941d06b49bba35a366672f3eaead8d770a037f6430854872a263dd5608a"
	Jul 29 13:14:50 test-preload-695254 kubelet[1076]: E0729 13:14:50.756881    1076 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c4e3d0a7-9142-4407-a9db-e8b5e68ce450)\"" pod="kube-system/storage-provisioner" podUID=c4e3d0a7-9142-4407-a9db-e8b5e68ce450
	Jul 29 13:14:51 test-preload-695254 kubelet[1076]: E0729 13:14:51.282057    1076 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 13:14:51 test-preload-695254 kubelet[1076]: E0729 13:14:51.282228    1076 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume podName:d072c450-99b9-4c62-9908-1d53ad7bedee nodeName:}" failed. No retries permitted until 2024-07-29 13:14:55.282200598 +0000 UTC m=+12.823938302 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d072c450-99b9-4c62-9908-1d53ad7bedee-config-volume") pod "coredns-6d4b75cb6d-qhzcq" (UID: "d072c450-99b9-4c62-9908-1d53ad7bedee") : object "kube-system"/"coredns" not registered
	Jul 29 13:14:51 test-preload-695254 kubelet[1076]: E0729 13:14:51.695820    1076 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qhzcq" podUID=d072c450-99b9-4c62-9908-1d53ad7bedee
	
	
	==> storage-provisioner [7d809941d06b49bba35a366672f3eaead8d770a037f6430854872a263dd5608a] <==
	I0729 13:14:48.917302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 13:14:48.924043       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-695254 -n test-preload-695254
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-695254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-695254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-695254
--- FAIL: TestPreload (312.75s)

                                                
                                    
x
+
TestKubernetesUpgrade (445.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m10.344201056s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-375555] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-375555" primary control-plane node in "kubernetes-upgrade-375555" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:18:00.277407  280719 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:18:00.277640  280719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:18:00.277648  280719 out.go:304] Setting ErrFile to fd 2...
	I0729 13:18:00.277652  280719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:18:00.277820  280719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:18:00.278368  280719 out.go:298] Setting JSON to false
	I0729 13:18:00.279269  280719 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10823,"bootTime":1722248257,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:18:00.279326  280719 start.go:139] virtualization: kvm guest
	I0729 13:18:00.281904  280719 out.go:177] * [kubernetes-upgrade-375555] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:18:00.283602  280719 notify.go:220] Checking for updates...
	I0729 13:18:00.283619  280719 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:18:00.285171  280719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:18:00.286661  280719 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:18:00.288199  280719 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:18:00.289623  280719 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:18:00.291171  280719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:18:00.292821  280719 config.go:182] Loaded profile config "NoKubernetes-225538": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:18:00.292929  280719 config.go:182] Loaded profile config "force-systemd-env-265470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:18:00.293016  280719 config.go:182] Loaded profile config "running-upgrade-614412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0729 13:18:00.293109  280719 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:18:00.328848  280719 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:18:00.330495  280719 start.go:297] selected driver: kvm2
	I0729 13:18:00.330510  280719 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:18:00.330521  280719 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:18:00.331258  280719 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:18:00.331354  280719 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:18:00.347253  280719 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:18:00.347316  280719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:18:00.347597  280719 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 13:18:00.347681  280719 cni.go:84] Creating CNI manager for ""
	I0729 13:18:00.347700  280719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:18:00.347714  280719 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:18:00.347798  280719 start.go:340] cluster config:
	{Name:kubernetes-upgrade-375555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-375555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:18:00.347974  280719 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:18:00.350034  280719 out.go:177] * Starting "kubernetes-upgrade-375555" primary control-plane node in "kubernetes-upgrade-375555" cluster
	I0729 13:18:00.351500  280719 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:18:00.351545  280719 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:18:00.351556  280719 cache.go:56] Caching tarball of preloaded images
	I0729 13:18:00.351685  280719 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:18:00.351699  280719 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:18:00.351830  280719 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/config.json ...
	I0729 13:18:00.351851  280719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/config.json: {Name:mk76227086bc1aeea1624809e61463932ac45d5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:18:00.351981  280719 start.go:360] acquireMachinesLock for kubernetes-upgrade-375555: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:18:39.857270  280719 start.go:364] duration metric: took 39.505250124s to acquireMachinesLock for "kubernetes-upgrade-375555"
	I0729 13:18:39.857360  280719 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-375555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-375555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:18:39.857461  280719 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:18:39.859493  280719 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:18:39.859797  280719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:18:39.859844  280719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:18:39.876600  280719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0729 13:18:39.877068  280719 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:18:39.877697  280719 main.go:141] libmachine: Using API Version  1
	I0729 13:18:39.877723  280719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:18:39.878106  280719 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:18:39.878308  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetMachineName
	I0729 13:18:39.878501  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:18:39.878674  280719 start.go:159] libmachine.API.Create for "kubernetes-upgrade-375555" (driver="kvm2")
	I0729 13:18:39.878705  280719 client.go:168] LocalClient.Create starting
	I0729 13:18:39.878740  280719 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem
	I0729 13:18:39.878782  280719 main.go:141] libmachine: Decoding PEM data...
	I0729 13:18:39.878810  280719 main.go:141] libmachine: Parsing certificate...
	I0729 13:18:39.878888  280719 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem
	I0729 13:18:39.878909  280719 main.go:141] libmachine: Decoding PEM data...
	I0729 13:18:39.878919  280719 main.go:141] libmachine: Parsing certificate...
	I0729 13:18:39.878940  280719 main.go:141] libmachine: Running pre-create checks...
	I0729 13:18:39.878952  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .PreCreateCheck
	I0729 13:18:39.879371  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetConfigRaw
	I0729 13:18:39.879796  280719 main.go:141] libmachine: Creating machine...
	I0729 13:18:39.879824  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .Create
	I0729 13:18:39.879992  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Creating KVM machine...
	I0729 13:18:39.881151  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found existing default KVM network
	I0729 13:18:39.882387  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:39.882219  281355 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dd:ac:ea} reservation:<nil>}
	I0729 13:18:39.883354  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:39.883267  281355 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002867e0}
	I0729 13:18:39.883379  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | created network xml: 
	I0729 13:18:39.883389  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | <network>
	I0729 13:18:39.883404  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |   <name>mk-kubernetes-upgrade-375555</name>
	I0729 13:18:39.883416  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |   <dns enable='no'/>
	I0729 13:18:39.883427  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |   
	I0729 13:18:39.883438  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 13:18:39.883449  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |     <dhcp>
	I0729 13:18:39.883459  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 13:18:39.883480  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |     </dhcp>
	I0729 13:18:39.883493  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |   </ip>
	I0729 13:18:39.883501  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG |   
	I0729 13:18:39.883510  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | </network>
	I0729 13:18:39.883517  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | 
	I0729 13:18:39.888817  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | trying to create private KVM network mk-kubernetes-upgrade-375555 192.168.50.0/24...
	I0729 13:18:39.961218  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | private KVM network mk-kubernetes-upgrade-375555 192.168.50.0/24 created
	I0729 13:18:39.961254  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:39.961165  281355 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:18:39.961266  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Setting up store path in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555 ...
	I0729 13:18:39.961283  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Building disk image from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:18:39.961388  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Downloading /home/jenkins/minikube-integration/19341-233093/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:18:40.214333  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:40.214192  281355 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa...
	I0729 13:18:40.754318  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:40.754143  281355 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/kubernetes-upgrade-375555.rawdisk...
	I0729 13:18:40.754355  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Writing magic tar header
	I0729 13:18:40.754414  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Writing SSH key tar header
	I0729 13:18:40.754454  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:40.754259  281355 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555 ...
	I0729 13:18:40.754476  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555 (perms=drwx------)
	I0729 13:18:40.754497  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:18:40.754516  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube (perms=drwxr-xr-x)
	I0729 13:18:40.754538  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555
	I0729 13:18:40.754556  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines
	I0729 13:18:40.754570  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093 (perms=drwxrwxr-x)
	I0729 13:18:40.754585  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:18:40.754594  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:18:40.754614  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Creating domain...
	I0729 13:18:40.754629  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:18:40.754643  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093
	I0729 13:18:40.754655  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:18:40.754666  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:18:40.754676  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Checking permissions on dir: /home
	I0729 13:18:40.754691  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Skipping /home - not owner
	I0729 13:18:40.755804  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) define libvirt domain using xml: 
	I0729 13:18:40.755826  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) <domain type='kvm'>
	I0729 13:18:40.755834  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   <name>kubernetes-upgrade-375555</name>
	I0729 13:18:40.755843  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   <memory unit='MiB'>2200</memory>
	I0729 13:18:40.755849  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   <vcpu>2</vcpu>
	I0729 13:18:40.755853  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   <features>
	I0729 13:18:40.755875  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <acpi/>
	I0729 13:18:40.755885  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <apic/>
	I0729 13:18:40.755895  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <pae/>
	I0729 13:18:40.755905  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     
	I0729 13:18:40.755935  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   </features>
	I0729 13:18:40.755958  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   <cpu mode='host-passthrough'>
	I0729 13:18:40.755968  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   
	I0729 13:18:40.755979  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   </cpu>
	I0729 13:18:40.755988  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   <os>
	I0729 13:18:40.755995  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <type>hvm</type>
	I0729 13:18:40.756005  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <boot dev='cdrom'/>
	I0729 13:18:40.756012  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <boot dev='hd'/>
	I0729 13:18:40.756019  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <bootmenu enable='no'/>
	I0729 13:18:40.756026  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   </os>
	I0729 13:18:40.756031  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   <devices>
	I0729 13:18:40.756044  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <disk type='file' device='cdrom'>
	I0729 13:18:40.756062  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/boot2docker.iso'/>
	I0729 13:18:40.756074  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <target dev='hdc' bus='scsi'/>
	I0729 13:18:40.756085  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <readonly/>
	I0729 13:18:40.756093  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     </disk>
	I0729 13:18:40.756102  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <disk type='file' device='disk'>
	I0729 13:18:40.756128  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:18:40.756147  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/kubernetes-upgrade-375555.rawdisk'/>
	I0729 13:18:40.756158  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <target dev='hda' bus='virtio'/>
	I0729 13:18:40.756167  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     </disk>
	I0729 13:18:40.756178  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <interface type='network'>
	I0729 13:18:40.756221  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <source network='mk-kubernetes-upgrade-375555'/>
	I0729 13:18:40.756245  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <model type='virtio'/>
	I0729 13:18:40.756261  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     </interface>
	I0729 13:18:40.756279  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <interface type='network'>
	I0729 13:18:40.756287  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <source network='default'/>
	I0729 13:18:40.756292  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <model type='virtio'/>
	I0729 13:18:40.756300  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     </interface>
	I0729 13:18:40.756306  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <serial type='pty'>
	I0729 13:18:40.756312  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <target port='0'/>
	I0729 13:18:40.756324  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     </serial>
	I0729 13:18:40.756332  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <console type='pty'>
	I0729 13:18:40.756337  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <target type='serial' port='0'/>
	I0729 13:18:40.756351  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     </console>
	I0729 13:18:40.756365  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     <rng model='virtio'>
	I0729 13:18:40.756375  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)       <backend model='random'>/dev/random</backend>
	I0729 13:18:40.756389  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     </rng>
	I0729 13:18:40.756401  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     
	I0729 13:18:40.756410  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)     
	I0729 13:18:40.756419  280719 main.go:141] libmachine: (kubernetes-upgrade-375555)   </devices>
	I0729 13:18:40.756437  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) </domain>
	I0729 13:18:40.756449  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) 
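	[editor's note] The XML above is the libvirt domain definition minikube generated for the VM. If needed, the domain and its mk-kubernetes-upgrade-375555 network can also be inspected directly on the Jenkins host with standard virsh commands (illustrative, assumes access to the system libvirt socket; not part of the test run):
	    virsh -c qemu:///system dumpxml kubernetes-upgrade-375555
	    virsh -c qemu:///system net-dumpxml mk-kubernetes-upgrade-375555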
	I0729 13:18:40.760820  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:cd:90:23 in network default
	I0729 13:18:40.761523  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Ensuring networks are active...
	I0729 13:18:40.761544  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:40.762255  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Ensuring network default is active
	I0729 13:18:40.762573  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Ensuring network mk-kubernetes-upgrade-375555 is active
	I0729 13:18:40.763178  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Getting domain xml...
	I0729 13:18:40.764017  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Creating domain...
	I0729 13:18:41.954214  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Waiting to get IP...
	I0729 13:18:41.954952  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:41.955402  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:41.955454  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:41.955377  281355 retry.go:31] will retry after 246.626575ms: waiting for machine to come up
	I0729 13:18:42.203852  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:42.204493  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:42.204523  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:42.204434  281355 retry.go:31] will retry after 374.763182ms: waiting for machine to come up
	I0729 13:18:42.581027  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:42.581535  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:42.581567  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:42.581478  281355 retry.go:31] will retry after 329.397975ms: waiting for machine to come up
	I0729 13:18:42.911994  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:42.912555  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:42.912580  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:42.912501  281355 retry.go:31] will retry after 598.10045ms: waiting for machine to come up
	I0729 13:18:43.512324  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:43.512825  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:43.512857  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:43.512755  281355 retry.go:31] will retry after 597.651504ms: waiting for machine to come up
	I0729 13:18:44.111763  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:44.112271  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:44.112305  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:44.112219  281355 retry.go:31] will retry after 587.891025ms: waiting for machine to come up
	I0729 13:18:44.702096  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:44.702780  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:44.702811  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:44.702708  281355 retry.go:31] will retry after 1.11380747s: waiting for machine to come up
	I0729 13:18:45.818096  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:45.818739  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:45.818765  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:45.818639  281355 retry.go:31] will retry after 1.152338202s: waiting for machine to come up
	I0729 13:18:46.972357  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:46.972925  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:46.972952  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:46.972862  281355 retry.go:31] will retry after 1.783905983s: waiting for machine to come up
	I0729 13:18:48.758071  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:48.758584  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:48.758646  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:48.758553  281355 retry.go:31] will retry after 2.227593934s: waiting for machine to come up
	I0729 13:18:50.987520  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:50.988081  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:50.988113  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:50.988017  281355 retry.go:31] will retry after 2.004882746s: waiting for machine to come up
	I0729 13:18:52.994282  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:52.994645  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:52.994669  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:52.994615  281355 retry.go:31] will retry after 2.891727082s: waiting for machine to come up
	I0729 13:18:55.887872  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:55.888334  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:55.888366  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:55.888260  281355 retry.go:31] will retry after 3.245308927s: waiting for machine to come up
	I0729 13:18:59.137665  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:18:59.138257  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find current IP address of domain kubernetes-upgrade-375555 in network mk-kubernetes-upgrade-375555
	I0729 13:18:59.138287  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | I0729 13:18:59.138217  281355 retry.go:31] will retry after 4.066455082s: waiting for machine to come up
	I0729 13:19:03.208374  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.208950  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Found IP for machine: 192.168.50.118
	I0729 13:19:03.208985  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has current primary IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.209016  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Reserving static IP address...
	I0729 13:19:03.209402  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-375555", mac: "52:54:00:d2:ac:80", ip: "192.168.50.118"} in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.283813  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Getting to WaitForSSH function...
	I0729 13:19:03.283849  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Reserved static IP address: 192.168.50.118
	I0729 13:19:03.283865  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Waiting for SSH to be available...
	I0729 13:19:03.286695  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.287134  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:03.287159  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.287241  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Using SSH client type: external
	I0729 13:19:03.287321  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa (-rw-------)
	I0729 13:19:03.287363  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:19:03.287382  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | About to run SSH command:
	I0729 13:19:03.287392  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | exit 0
	I0729 13:19:03.421316  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | SSH cmd err, output: <nil>: 
	I0729 13:19:03.421589  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) KVM machine creation complete!
	I0729 13:19:03.422025  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetConfigRaw
	I0729 13:19:03.422637  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:19:03.422832  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:19:03.423015  280719 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:19:03.423048  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetState
	I0729 13:19:03.424623  280719 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:19:03.424641  280719 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:19:03.424649  280719 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:19:03.424678  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:03.427537  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.428013  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:03.428043  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.428261  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:03.428460  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.428598  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.428732  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:03.428973  280719 main.go:141] libmachine: Using SSH client type: native
	I0729 13:19:03.429220  280719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.118 22 <nil> <nil>}
	I0729 13:19:03.429233  280719 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:19:03.540525  280719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:19:03.540561  280719 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:19:03.540573  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:03.543581  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.543999  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:03.544042  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.544181  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:03.544415  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.544595  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.544772  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:03.544966  280719 main.go:141] libmachine: Using SSH client type: native
	I0729 13:19:03.545203  280719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.118 22 <nil> <nil>}
	I0729 13:19:03.545218  280719 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:19:03.658097  280719 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:19:03.658253  280719 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:19:03.658269  280719 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:19:03.658281  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetMachineName
	I0729 13:19:03.658550  280719 buildroot.go:166] provisioning hostname "kubernetes-upgrade-375555"
	I0729 13:19:03.658585  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetMachineName
	I0729 13:19:03.658761  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:03.661585  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.662027  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:03.662059  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.662244  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:03.662460  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.662618  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.662747  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:03.662948  280719 main.go:141] libmachine: Using SSH client type: native
	I0729 13:19:03.663161  280719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.118 22 <nil> <nil>}
	I0729 13:19:03.663177  280719 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-375555 && echo "kubernetes-upgrade-375555" | sudo tee /etc/hostname
	I0729 13:19:03.792596  280719 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-375555
	
	I0729 13:19:03.792707  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:03.796131  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.796561  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:03.796634  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.796773  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:03.797000  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.797218  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:03.797412  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:03.797638  280719 main.go:141] libmachine: Using SSH client type: native
	I0729 13:19:03.797886  280719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.118 22 <nil> <nil>}
	I0729 13:19:03.797915  280719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-375555' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-375555/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-375555' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:19:03.922369  280719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:19:03.922412  280719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:19:03.922439  280719 buildroot.go:174] setting up certificates
	I0729 13:19:03.922450  280719 provision.go:84] configureAuth start
	I0729 13:19:03.922459  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetMachineName
	I0729 13:19:03.922787  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetIP
	I0729 13:19:03.925492  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.925925  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:03.925960  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.926090  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:03.928413  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.928810  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:03.928851  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:03.928992  280719 provision.go:143] copyHostCerts
	I0729 13:19:03.929052  280719 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:19:03.929062  280719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:19:03.929119  280719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:19:03.929201  280719 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:19:03.929210  280719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:19:03.929229  280719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:19:03.929274  280719 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:19:03.929281  280719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:19:03.929297  280719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:19:03.929337  280719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-375555 san=[127.0.0.1 192.168.50.118 kubernetes-upgrade-375555 localhost minikube]
	I0729 13:19:04.061448  280719 provision.go:177] copyRemoteCerts
	I0729 13:19:04.061510  280719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:19:04.061536  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:04.063915  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.064271  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.064326  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.064509  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:04.064700  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.064867  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:04.064986  280719 sshutil.go:53] new ssh client: &{IP:192.168.50.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa Username:docker}
	I0729 13:19:04.151203  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:19:04.176426  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 13:19:04.199753  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:19:04.224151  280719 provision.go:87] duration metric: took 301.688383ms to configureAuth
	I0729 13:19:04.224187  280719 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:19:04.224389  280719 config.go:182] Loaded profile config "kubernetes-upgrade-375555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:19:04.224482  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:04.227338  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.227710  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.227743  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.227995  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:04.228193  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.228352  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.228469  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:04.228630  280719 main.go:141] libmachine: Using SSH client type: native
	I0729 13:19:04.228845  280719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.118 22 <nil> <nil>}
	I0729 13:19:04.228867  280719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:19:04.491675  280719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:19:04.491704  280719 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:19:04.491713  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetURL
	I0729 13:19:04.493177  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Using libvirt version 6000000
	I0729 13:19:04.496202  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.496640  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.496669  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.496895  280719 main.go:141] libmachine: Docker is up and running!
	I0729 13:19:04.496915  280719 main.go:141] libmachine: Reticulating splines...
	I0729 13:19:04.496924  280719 client.go:171] duration metric: took 24.618207247s to LocalClient.Create
	I0729 13:19:04.496952  280719 start.go:167] duration metric: took 24.618278921s to libmachine.API.Create "kubernetes-upgrade-375555"
	I0729 13:19:04.496965  280719 start.go:293] postStartSetup for "kubernetes-upgrade-375555" (driver="kvm2")
	I0729 13:19:04.496979  280719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:19:04.497002  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:19:04.497270  280719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:19:04.497301  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:04.499580  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.500024  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.500052  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.500176  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:04.500366  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.500517  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:04.500649  280719 sshutil.go:53] new ssh client: &{IP:192.168.50.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa Username:docker}
	I0729 13:19:04.587582  280719 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:19:04.593393  280719 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:19:04.593419  280719 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:19:04.593493  280719 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:19:04.593580  280719 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:19:04.593687  280719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:19:04.607200  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:19:04.633844  280719 start.go:296] duration metric: took 136.860435ms for postStartSetup
	I0729 13:19:04.633903  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetConfigRaw
	I0729 13:19:04.634663  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetIP
	I0729 13:19:04.637830  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.638311  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.638338  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.638587  280719 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/config.json ...
	I0729 13:19:04.638813  280719 start.go:128] duration metric: took 24.781337744s to createHost
	I0729 13:19:04.638846  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:04.641228  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.641587  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.641618  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.641756  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:04.641974  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.642155  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.642321  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:04.642509  280719 main.go:141] libmachine: Using SSH client type: native
	I0729 13:19:04.642712  280719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.118 22 <nil> <nil>}
	I0729 13:19:04.642726  280719 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:19:04.761923  280719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259144.707102973
	
	I0729 13:19:04.761953  280719 fix.go:216] guest clock: 1722259144.707102973
	I0729 13:19:04.761964  280719 fix.go:229] Guest: 2024-07-29 13:19:04.707102973 +0000 UTC Remote: 2024-07-29 13:19:04.638830642 +0000 UTC m=+64.407300071 (delta=68.272331ms)
	I0729 13:19:04.762000  280719 fix.go:200] guest clock delta is within tolerance: 68.272331ms
	I0729 13:19:04.762006  280719 start.go:83] releasing machines lock for "kubernetes-upgrade-375555", held for 24.904692889s
	I0729 13:19:04.762039  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:19:04.762348  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetIP
	I0729 13:19:04.765172  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.765512  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.765544  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.765705  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:19:04.766213  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:19:04.766410  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:19:04.766526  280719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:19:04.766579  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:04.766638  280719 ssh_runner.go:195] Run: cat /version.json
	I0729 13:19:04.766660  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:19:04.769223  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.769473  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.769520  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.769556  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.769806  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:04.769942  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:04.769973  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:04.769982  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.770146  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:04.770216  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:19:04.770331  280719 sshutil.go:53] new ssh client: &{IP:192.168.50.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa Username:docker}
	I0729 13:19:04.770372  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:19:04.770487  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:19:04.770618  280719 sshutil.go:53] new ssh client: &{IP:192.168.50.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa Username:docker}
	I0729 13:19:04.858745  280719 ssh_runner.go:195] Run: systemctl --version
	I0729 13:19:04.885703  280719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:19:05.052813  280719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:19:05.059669  280719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:19:05.059751  280719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:19:05.076507  280719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:19:05.076535  280719 start.go:495] detecting cgroup driver to use...
	I0729 13:19:05.076612  280719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:19:05.098020  280719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:19:05.114398  280719 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:19:05.114498  280719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:19:05.128620  280719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:19:05.143264  280719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:19:05.274763  280719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:19:05.462777  280719 docker.go:233] disabling docker service ...
	I0729 13:19:05.462860  280719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:19:05.483639  280719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:19:05.499904  280719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:19:05.661808  280719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:19:05.831493  280719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
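Note: before configuring CRI-O, the lines above show minikube stopping, disabling, and masking the cri-docker and docker units so they cannot claim the container runtime. An illustrative sketch of the equivalent manual sequence on the guest, using only the systemctl invocations that appear in the log (not part of the captured output):

    # Disable cri-docker (socket and service), then docker itself.
    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service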
	I0729 13:19:05.848042  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:19:05.870816  280719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:19:05.870887  280719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:19:05.884926  280719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:19:05.885014  280719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:19:05.897414  280719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:19:05.908977  280719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:19:05.919626  280719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:19:05.930608  280719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:19:05.940491  280719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:19:05.940558  280719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:19:05.955470  280719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:19:05.965446  280719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:19:06.110385  280719 ssh_runner.go:195] Run: sudo systemctl restart crio
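Note: the block above configures the guest's CRI-O for this cluster: crictl is pointed at the CRI-O socket, the pause image and the cgroupfs cgroup manager are set in /etc/crio/crio.conf.d/02-crio.conf, br_netfilter is loaded, IPv4 forwarding is enabled, and CRI-O is restarted. A condensed sketch of the same steps, assembled from the commands in the log (illustrative only):

    # Point crictl at CRI-O's socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pin the pause image and switch the cgroup manager to cgroupfs.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # Kernel prerequisites for bridged pod traffic, then restart the runtime.
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload
    sudo systemctl restart crio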
	I0729 13:19:06.260849  280719 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:19:06.260936  280719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:19:06.266818  280719 start.go:563] Will wait 60s for crictl version
	I0729 13:19:06.266887  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:06.270636  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:19:06.316020  280719 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:19:06.316138  280719 ssh_runner.go:195] Run: crio --version
	I0729 13:19:06.348552  280719 ssh_runner.go:195] Run: crio --version
	I0729 13:19:06.384257  280719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:19:06.385689  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetIP
	I0729 13:19:06.389286  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:06.389823  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:18:55 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:19:06.389857  280719 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:19:06.390138  280719 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 13:19:06.394617  280719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
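Note: the two commands above pin host.minikube.internal to the gateway address 192.168.50.1 in the guest's /etc/hosts. An equivalent sketch of the same grep-and-append pattern (printf used here for the tab; the log uses echo with a literal tab):

    # Drop any stale entry, append the current one, then copy back over /etc/hosts.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.50.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts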
	I0729 13:19:06.407872  280719 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-375555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-375555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.118 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:19:06.408070  280719 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:19:06.408140  280719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:19:06.442137  280719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:19:06.442201  280719 ssh_runner.go:195] Run: which lz4
	I0729 13:19:06.446555  280719 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:19:06.451088  280719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:19:06.451125  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:19:08.202166  280719 crio.go:462] duration metric: took 1.755636096s to copy over tarball
	I0729 13:19:08.202267  280719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:19:10.860843  280719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.658525347s)
	I0729 13:19:10.860881  280719 crio.go:469] duration metric: took 2.658670756s to extract the tarball
	I0729 13:19:10.860891  280719 ssh_runner.go:146] rm: /preloaded.tar.lz4
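Note: the preload step above copies the cri-o preload tarball (about 450 MB) to /preloaded.tar.lz4 on the guest, unpacks it into /var (CRI-O's image store lives under /var/lib/containers), and then deletes the tarball; the next log lines re-run crictl to see what the runtime now reports. The extraction and verification commands, as they appear in the log:

    # Unpack the preloaded images into /var, preserving extended attributes.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    # Ask the runtime what images it now has.
    sudo crictl images --output json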
	I0729 13:19:10.904581  280719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:19:10.953825  280719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:19:10.953853  280719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:19:10.953926  280719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:19:10.953964  280719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:19:10.953973  280719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:19:10.953929  280719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:19:10.954052  280719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:19:10.953980  280719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:19:10.954004  280719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:19:10.954008  280719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:19:10.955747  280719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:19:10.955753  280719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:19:10.955748  280719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:19:10.955748  280719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:19:10.955793  280719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:19:10.955878  280719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:19:10.955897  280719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:19:10.956127  280719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:19:11.168427  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:19:11.216374  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:19:11.216567  280719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:19:11.216618  280719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:19:11.216662  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:11.260757  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:19:11.263965  280719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:19:11.264017  280719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:19:11.264067  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:11.264079  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:19:11.320331  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:19:11.326282  280719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:19:11.326335  280719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:19:11.326349  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:19:11.326377  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:11.326394  280719 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:19:11.334906  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:19:11.334906  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:19:11.396544  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:19:11.421013  280719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:19:11.421068  280719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:19:11.421100  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:19:11.421117  280719 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:19:11.421109  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:11.423103  280719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:19:11.423136  280719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:19:11.423174  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:11.454112  280719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:19:11.454167  280719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:19:11.454239  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:11.493671  280719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:19:11.493731  280719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:19:11.493744  280719 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:19:11.493799  280719 ssh_runner.go:195] Run: which crictl
	I0729 13:19:11.493836  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:19:11.493874  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:19:11.493844  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:19:11.569244  280719 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:19:11.575994  280719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:19:11.576034  280719 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:19:11.576089  280719 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:19:11.610096  280719 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:19:13.089002  280719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:19:13.232726  280719 cache_images.go:92] duration metric: took 2.278850811s to LoadCachedImages
	W0729 13:19:13.232856  280719 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
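Note: the "needs transfer" / rmi sequence above removes images whose on-disk IDs do not match the expected v1.20.0 digests, and the warning shows the local cache under .minikube/cache/images is empty, so nothing can be loaded from it; this is consistent with the "[preflight] Pulling images" line later in the log, where kubeadm fetches the images itself. The per-image check-and-remove pattern, taken from the commands in the log:

    # Inspect the image the runtime currently has; remove it if it is not the expected one.
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/coredns:1.7.0
    sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0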
	I0729 13:19:13.232877  280719 kubeadm.go:934] updating node { 192.168.50.118 8443 v1.20.0 crio true true} ...
	I0729 13:19:13.233037  280719 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-375555 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-375555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:19:13.233136  280719 ssh_runner.go:195] Run: crio config
	I0729 13:19:13.289073  280719 cni.go:84] Creating CNI manager for ""
	I0729 13:19:13.289098  280719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:19:13.289110  280719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:19:13.289136  280719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.118 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-375555 NodeName:kubernetes-upgrade-375555 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:19:13.289359  280719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-375555"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:19:13.289445  280719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:19:13.300261  280719 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:19:13.300338  280719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:19:13.313153  280719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0729 13:19:13.333604  280719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:19:13.354198  280719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0729 13:19:13.372225  280719 ssh_runner.go:195] Run: grep 192.168.50.118	control-plane.minikube.internal$ /etc/hosts
	I0729 13:19:13.376302  280719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:19:13.389420  280719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:19:13.546576  280719 ssh_runner.go:195] Run: sudo systemctl start kubelet
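Note: above, minikube writes the kubelet systemd drop-in (the "[Unit] ... ExecStart=..." block shown earlier), the kubelet.service unit, and the kubeadm config, pins control-plane.minikube.internal in /etc/hosts, then reloads systemd and starts the kubelet. A sketch of the final reload/start plus standard checks that the unit came up (the status and journalctl commands are ordinary systemd diagnostics, not part of the captured log):

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    # Not in the log: confirm the unit actually started.
    systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 20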
	I0729 13:19:13.566288  280719 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555 for IP: 192.168.50.118
	I0729 13:19:13.566316  280719 certs.go:194] generating shared ca certs ...
	I0729 13:19:13.566339  280719 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:19:13.566523  280719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:19:13.566586  280719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:19:13.566600  280719 certs.go:256] generating profile certs ...
	I0729 13:19:13.566685  280719 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.key
	I0729 13:19:13.566707  280719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.crt with IP's: []
	I0729 13:19:13.698072  280719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.crt ...
	I0729 13:19:13.698112  280719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.crt: {Name:mk62ab9a5175890f276d1f69df0f3405513487f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:19:13.698345  280719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.key ...
	I0729 13:19:13.698369  280719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.key: {Name:mk58a3046d7b4550fc8a58af5f7da427855a9459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:19:13.698513  280719 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.key.717ebdc1
	I0729 13:19:13.698537  280719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.crt.717ebdc1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.118]
	I0729 13:19:13.813263  280719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.crt.717ebdc1 ...
	I0729 13:19:13.813312  280719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.crt.717ebdc1: {Name:mkc85bea56ba90ea5bcd4b80bddf234ab9a01ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:19:13.813556  280719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.key.717ebdc1 ...
	I0729 13:19:13.813581  280719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.key.717ebdc1: {Name:mkdd74f1e5e7005c0052b6e1a1cf04740ce7c93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:19:13.813689  280719 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.crt.717ebdc1 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.crt
	I0729 13:19:13.813799  280719 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.key.717ebdc1 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.key
	I0729 13:19:13.813877  280719 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.key
	I0729 13:19:13.813897  280719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.crt with IP's: []
	I0729 13:19:14.020726  280719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.crt ...
	I0729 13:19:14.020787  280719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.crt: {Name:mk16ca57c1fbae60b7b94187cee00f1be18bece1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:19:14.021006  280719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.key ...
	I0729 13:19:14.021029  280719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.key: {Name:mk6645187a4b3da963002a4125013100b717749d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:19:14.021241  280719 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:19:14.021279  280719 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:19:14.021289  280719 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:19:14.021314  280719 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:19:14.021339  280719 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:19:14.021368  280719 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:19:14.021405  280719 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:19:14.022196  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:19:14.050449  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:19:14.076420  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:19:14.104395  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:19:14.132847  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 13:19:14.160558  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:19:14.208507  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:19:14.241348  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:19:14.274304  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:19:14.299011  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:19:14.333242  280719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:19:14.362924  280719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:19:14.382768  280719 ssh_runner.go:195] Run: openssl version
	I0729 13:19:14.389390  280719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:19:14.402604  280719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:19:14.407419  280719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:19:14.407483  280719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:19:14.413558  280719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:19:14.430949  280719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:19:14.451125  280719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:19:14.456840  280719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:19:14.456903  280719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:19:14.464878  280719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:19:14.479466  280719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:19:14.494738  280719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:19:14.503261  280719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:19:14.503338  280719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:19:14.512632  280719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
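Note: each CA above is installed twice: copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trusted CAs in a CApath directory. A sketch of the hash-and-link step for one certificate, using the same commands as the log:

    # Compute the subject hash and expose the CA under /etc/ssl/certs/<hash>.0
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"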
	I0729 13:19:14.536730  280719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:19:14.542586  280719 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:19:14.542659  280719 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-375555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-375555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.118 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:19:14.542745  280719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:19:14.542814  280719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:19:14.586099  280719 cri.go:89] found id: ""
	I0729 13:19:14.586211  280719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:19:14.599940  280719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:19:14.612849  280719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:19:14.625589  280719 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:19:14.625652  280719 kubeadm.go:157] found existing configuration files:
	
	I0729 13:19:14.625708  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:19:14.635640  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:19:14.635704  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:19:14.645396  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:19:14.654610  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:19:14.654664  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:19:14.664422  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:19:14.673498  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:19:14.673544  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:19:14.683379  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:19:14.692607  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:19:14.692677  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:19:14.702699  280719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:19:14.834742  280719 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:19:14.834806  280719 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:19:14.992455  280719 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:19:14.992583  280719 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:19:14.992748  280719 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:19:15.172319  280719 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:19:15.174559  280719 out.go:204]   - Generating certificates and keys ...
	I0729 13:19:15.174687  280719 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:19:15.174792  280719 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:19:15.258119  280719 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 13:19:15.581910  280719 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 13:19:15.710647  280719 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 13:19:15.913520  280719 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 13:19:16.022768  280719 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 13:19:16.022973  280719 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-375555 localhost] and IPs [192.168.50.118 127.0.0.1 ::1]
	I0729 13:19:16.098919  280719 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 13:19:16.099121  280719 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-375555 localhost] and IPs [192.168.50.118 127.0.0.1 ::1]
	I0729 13:19:16.172287  280719 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 13:19:16.354898  280719 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 13:19:16.536517  280719 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 13:19:16.536678  280719 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:19:16.805726  280719 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:19:16.939991  280719 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:19:17.037794  280719 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:19:17.238844  280719 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:19:17.274559  280719 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:19:17.275644  280719 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:19:17.275751  280719 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:19:17.409248  280719 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:19:17.411759  280719 out.go:204]   - Booting up control plane ...
	I0729 13:19:17.411866  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:19:17.418053  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:19:17.420027  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:19:17.420133  280719 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:19:17.423905  280719 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:19:57.372713  280719 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:19:57.373130  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:19:57.373386  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:20:02.373639  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:20:02.373879  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:20:12.373146  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:20:12.373382  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:20:32.374351  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:20:32.374604  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:21:12.376131  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:21:12.376418  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:21:12.376469  280719 kubeadm.go:310] 
	I0729 13:21:12.376531  280719 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:21:12.376594  280719 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:21:12.376604  280719 kubeadm.go:310] 
	I0729 13:21:12.376666  280719 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:21:12.376735  280719 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:21:12.376898  280719 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:21:12.376918  280719 kubeadm.go:310] 
	I0729 13:21:12.377044  280719 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:21:12.377090  280719 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:21:12.377142  280719 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:21:12.377152  280719 kubeadm.go:310] 
	I0729 13:21:12.377338  280719 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:21:12.377451  280719 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:21:12.377462  280719 kubeadm.go:310] 
	I0729 13:21:12.377640  280719 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:21:12.377822  280719 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:21:12.377936  280719 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:21:12.378038  280719 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:21:12.378050  280719 kubeadm.go:310] 
	I0729 13:21:12.378414  280719 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:21:12.378533  280719 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:21:12.378622  280719 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
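The repeated kubelet-check failures above come from kubeadm polling the kubelet's local healthz endpoint on port 10248. A minimal manual check from inside the node (for example after 'minikube ssh') might look like the sketch below; it largely reuses commands already quoted in this log:

    # Query the same endpoint kubeadm polls during wait-control-plane
    curl -sSL http://localhost:10248/healthz
    # Inspect the kubelet service state and its recent journal entries
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50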
	W0729 13:21:12.378849  280719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-375555 localhost] and IPs [192.168.50.118 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-375555 localhost] and IPs [192.168.50.118 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:21:12.378901  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:21:13.426516  280719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.047589632s)
	I0729 13:21:13.426601  280719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:21:13.441571  280719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:21:13.452209  280719 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:21:13.452237  280719 kubeadm.go:157] found existing configuration files:
	
	I0729 13:21:13.452293  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:21:13.461573  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:21:13.461625  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:21:13.470766  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:21:13.479511  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:21:13.479565  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:21:13.488768  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:21:13.497397  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:21:13.497451  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:21:13.506879  280719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:21:13.515347  280719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:21:13.515389  280719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
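Before retrying 'kubeadm init', minikube checks whether each existing kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint and removes the files that do not. The grep/rm pairs above amount to the following sketch (endpoint and file names taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done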
	I0729 13:21:13.524359  280719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:21:13.738025  280719 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:23:09.774781  280719 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:23:09.774945  280719 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:23:09.776879  280719 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:23:09.776935  280719 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:23:09.777019  280719 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:23:09.777136  280719 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:23:09.777248  280719 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:23:09.777326  280719 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:23:09.779524  280719 out.go:204]   - Generating certificates and keys ...
	I0729 13:23:09.779628  280719 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:23:09.779721  280719 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:23:09.779853  280719 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:23:09.779983  280719 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:23:09.780088  280719 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:23:09.780155  280719 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:23:09.780211  280719 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:23:09.780302  280719 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:23:09.780412  280719 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:23:09.780525  280719 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:23:09.780581  280719 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:23:09.780653  280719 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:23:09.780722  280719 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:23:09.780786  280719 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:23:09.780884  280719 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:23:09.780956  280719 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:23:09.781101  280719 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:23:09.781211  280719 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:23:09.781247  280719 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:23:09.781313  280719 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:23:09.782976  280719 out.go:204]   - Booting up control plane ...
	I0729 13:23:09.783067  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:23:09.783147  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:23:09.783218  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:23:09.783314  280719 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:23:09.783513  280719 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:23:09.783587  280719 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:23:09.783692  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.783926  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784002  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.784213  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784313  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.784523  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784624  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.784916  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784999  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.785254  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.785266  280719 kubeadm.go:310] 
	I0729 13:23:09.785328  280719 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:23:09.785386  280719 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:23:09.785396  280719 kubeadm.go:310] 
	I0729 13:23:09.785465  280719 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:23:09.785513  280719 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:23:09.785647  280719 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:23:09.785659  280719 kubeadm.go:310] 
	I0729 13:23:09.785812  280719 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:23:09.785865  280719 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:23:09.785908  280719 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:23:09.785918  280719 kubeadm.go:310] 
	I0729 13:23:09.786037  280719 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:23:09.786145  280719 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:23:09.786155  280719 kubeadm.go:310] 
	I0729 13:23:09.786310  280719 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:23:09.786411  280719 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:23:09.786515  280719 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:23:09.786591  280719 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:23:09.786677  280719 kubeadm.go:310] 
	I0729 13:23:09.786679  280719 kubeadm.go:394] duration metric: took 3m55.244027085s to StartCluster
	I0729 13:23:09.786745  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:23:09.786818  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:23:09.842864  280719 cri.go:89] found id: ""
	I0729 13:23:09.842893  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.842904  280719 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:23:09.842911  280719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:23:09.842982  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:23:09.887119  280719 cri.go:89] found id: ""
	I0729 13:23:09.887157  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.887171  280719 logs.go:278] No container was found matching "etcd"
	I0729 13:23:09.887181  280719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:23:09.887253  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:23:09.936955  280719 cri.go:89] found id: ""
	I0729 13:23:09.936984  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.936995  280719 logs.go:278] No container was found matching "coredns"
	I0729 13:23:09.937002  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:23:09.937068  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:23:09.986449  280719 cri.go:89] found id: ""
	I0729 13:23:09.986484  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.986496  280719 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:23:09.986504  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:23:09.986575  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:23:10.035098  280719 cri.go:89] found id: ""
	I0729 13:23:10.035131  280719 logs.go:276] 0 containers: []
	W0729 13:23:10.035143  280719 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:23:10.035151  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:23:10.035222  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:23:10.081348  280719 cri.go:89] found id: ""
	I0729 13:23:10.081381  280719 logs.go:276] 0 containers: []
	W0729 13:23:10.081394  280719 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:23:10.081402  280719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:23:10.081467  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:23:10.124531  280719 cri.go:89] found id: ""
	I0729 13:23:10.124575  280719 logs.go:276] 0 containers: []
	W0729 13:23:10.124587  280719 logs.go:278] No container was found matching "kindnet"
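With the API server unreachable, minikube enumerates the expected control-plane containers one component at a time through the CRI; none were found here. The per-component checks above are equivalent to this sketch (component names taken from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name=$name
    done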
	I0729 13:23:10.124600  280719 logs.go:123] Gathering logs for dmesg ...
	I0729 13:23:10.124617  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:23:10.162352  280719 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:23:10.162399  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:23:10.329830  280719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:23:10.329865  280719 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:23:10.329888  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:23:10.441461  280719 logs.go:123] Gathering logs for container status ...
	I0729 13:23:10.441514  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:23:10.495788  280719 logs.go:123] Gathering logs for kubelet ...
	I0729 13:23:10.495823  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
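The "Gathering logs" steps above collect the diagnostics that feed the rest of this report; a rough manual equivalent on the node would be (a sketch, mirroring the commands in the log):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
    sudo journalctl -u kubelet -n 400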
	W0729 13:23:10.557689  280719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:23:10.557746  280719 out.go:239] * 
	W0729 13:23:10.557803  280719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:23:10.557825  280719 out.go:239] * 
	W0729 13:23:10.558736  280719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:23:10.562723  280719 out.go:177] 
	W0729 13:23:10.564091  280719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:23:10.564168  280719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:23:10.564202  280719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:23:10.565735  280719 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
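For reference, a minimal sketch of following the troubleshooting hints in the error output above, assuming the profile name from this run; the final retry uses the --extra-config flag quoted in minikube's suggestion and is not verified to resolve this failure:

	# inspect the kubelet on the node (commands taken from the error output above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-375555 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p kubernetes-upgrade-375555 ssh -- sudo journalctl -xeu kubelet
	# list control-plane containers via crictl, as the error output suggests
	out/minikube-linux-amd64 -p kubernetes-upgrade-375555 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry with the cgroup driver minikube suggests (effect not verified in this run)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd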
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-375555
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-375555: (6.40517704s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-375555 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-375555 status --format={{.Host}}: exit status 7 (81.078205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.628438071s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-375555 version --output=json
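For reference, a minimal way to confirm by hand what this step checks, assuming jq is available; the jq path follows the standard `kubectl version --output=json` schema rather than anything shown in this log:

	kubectl --context kubernetes-upgrade-375555 version --output=json \
	  | jq -r '.serverVersion.gitVersion'   # should report v1.31.0-beta.0 after the upgrade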
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.564603ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-375555] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-375555
	    minikube start -p kubernetes-upgrade-375555 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3755552 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-375555 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
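For reference, a minimal sketch of option 1 from the K8S_DOWNGRADE_UNSUPPORTED suggestion above, preceded by a hedged check of the version stored in the existing profile; the --output json flag and the Config field names are assumptions based on minikube's profile listing and the cluster config dumped later in this log:

	# confirm the Kubernetes version recorded in the existing profile (field names assumed)
	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name=="kubernetes-upgrade-375555") | .Config.KubernetesConfig.KubernetesVersion'
	# recreate at v1.20.0, exactly as the suggestion above spells out
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-375555
	out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --kubernetes-version=v1.20.0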
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0729 13:24:27.880944  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-375555 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.284258612s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 13:25:22.172435352 +0000 UTC m=+5000.435790163
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-375555 -n kubernetes-upgrade-375555
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-375555 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-375555 logs -n 25: (2.015239142s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p NoKubernetes-225538                | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p NoKubernetes-225538                | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:21 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-614412             | running-upgrade-614412    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p pause-220574 --memory=2048         | pause-220574              | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:22 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-938122             | stopped-upgrade-938122    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p cert-expiration-168661             | cert-expiration-168661    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:22 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-225538 sudo           | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-225538                | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC | 29 Jul 24 13:21 UTC |
	| start   | -p force-systemd-flag-454180          | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC | 29 Jul 24 13:22 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-220574                       | pause-220574              | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:23 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-454180 ssh cat     | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:22 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-454180          | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:22 UTC |
	| start   | -p cert-options-606292                | cert-options-606292       | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:23 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-375555          | kubernetes-upgrade-375555 | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC | 29 Jul 24 13:23 UTC |
	| delete  | -p pause-220574                       | pause-220574              | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC | 29 Jul 24 13:23 UTC |
	| start   | -p kubernetes-upgrade-375555          | kubernetes-upgrade-375555 | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC | 29 Jul 24 13:24 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-606292 ssh               | cert-options-606292       | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC | 29 Jul 24 13:23 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-606292 -- sudo        | cert-options-606292       | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC | 29 Jul 24 13:23 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-606292                | cert-options-606292       | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC | 29 Jul 24 13:23 UTC |
	| start   | -p auto-507612 --memory=3072          | auto-507612               | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC | 29 Jul 24 13:25 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kindnet-507612                     | kindnet-507612            | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-375555          | kubernetes-upgrade-375555 | jenkins | v1.33.1 | 29 Jul 24 13:24 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-375555          | kubernetes-upgrade-375555 | jenkins | v1.33.1 | 29 Jul 24 13:24 UTC | 29 Jul 24 13:25 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-507612 pgrep -a               | auto-507612               | jenkins | v1.33.1 | 29 Jul 24 13:25 UTC | 29 Jul 24 13:25 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-168661             | cert-expiration-168661    | jenkins | v1.33.1 | 29 Jul 24 13:25 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:25:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:25:11.687383  286560 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:25:11.687483  286560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:25:11.687486  286560 out.go:304] Setting ErrFile to fd 2...
	I0729 13:25:11.687489  286560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:25:11.687659  286560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:25:11.688207  286560 out.go:298] Setting JSON to false
	I0729 13:25:11.689238  286560 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11255,"bootTime":1722248257,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:25:11.689294  286560 start.go:139] virtualization: kvm guest
	I0729 13:25:11.691984  286560 out.go:177] * [cert-expiration-168661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:25:11.693696  286560 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:25:11.693704  286560 notify.go:220] Checking for updates...
	I0729 13:25:11.696586  286560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:25:11.698118  286560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:25:11.699540  286560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:25:11.700876  286560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:25:11.702115  286560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:25:11.703735  286560 config.go:182] Loaded profile config "cert-expiration-168661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:25:11.704137  286560 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:11.704180  286560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:11.719169  286560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
	I0729 13:25:11.719593  286560 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:11.720237  286560 main.go:141] libmachine: Using API Version  1
	I0729 13:25:11.720261  286560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:11.720643  286560 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:11.720910  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:11.721175  286560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:25:11.721515  286560 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:11.721548  286560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:11.736120  286560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0729 13:25:11.736567  286560 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:11.737149  286560 main.go:141] libmachine: Using API Version  1
	I0729 13:25:11.737168  286560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:11.737498  286560 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:11.737670  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:11.773512  286560 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:25:11.775006  286560 start.go:297] selected driver: kvm2
	I0729 13:25:11.775015  286560 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-168661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:cert-expiration-168661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:25:11.775110  286560 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:25:11.775743  286560 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:25:11.775818  286560 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:25:11.791151  286560 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:25:11.791462  286560 cni.go:84] Creating CNI manager for ""
	I0729 13:25:11.791470  286560 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:25:11.791529  286560 start.go:340] cluster config:
	{Name:cert-expiration-168661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-168661 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:25:11.791637  286560 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:25:11.793573  286560 out.go:177] * Starting "cert-expiration-168661" primary control-plane node in "cert-expiration-168661" cluster
	I0729 13:25:11.794920  286560 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:25:11.794971  286560 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:25:11.794982  286560 cache.go:56] Caching tarball of preloaded images
	I0729 13:25:11.795061  286560 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:25:11.795067  286560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:25:11.795157  286560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-expiration-168661/config.json ...
	I0729 13:25:11.795330  286560 start.go:360] acquireMachinesLock for cert-expiration-168661: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:25:11.795364  286560 start.go:364] duration metric: took 22.53µs to acquireMachinesLock for "cert-expiration-168661"
	I0729 13:25:11.795375  286560 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:25:11.795378  286560 fix.go:54] fixHost starting: 
	I0729 13:25:11.795626  286560 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:11.795651  286560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:11.811466  286560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0729 13:25:11.811939  286560 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:11.812400  286560 main.go:141] libmachine: Using API Version  1
	I0729 13:25:11.812417  286560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:11.812772  286560 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:11.813019  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:11.813170  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetState
	I0729 13:25:11.814907  286560 fix.go:112] recreateIfNeeded on cert-expiration-168661: state=Running err=<nil>
	W0729 13:25:11.814919  286560 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:25:11.816820  286560 out.go:177] * Updating the running kvm2 "cert-expiration-168661" VM ...
	I0729 13:25:09.517695  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:10.017428  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:10.517832  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:11.017177  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:11.517264  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:12.017806  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:12.516920  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:13.017507  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:13.517304  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:14.016928  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:14.517239  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:15.017702  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:15.517666  285668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:25:15.600690  285668 kubeadm.go:1113] duration metric: took 12.308665894s to wait for elevateKubeSystemPrivileges
	I0729 13:25:15.600728  285668 kubeadm.go:394] duration metric: took 23.799992987s to StartCluster
	I0729 13:25:15.600745  285668 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:25:15.600851  285668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:25:15.602353  285668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:25:15.602609  285668 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:25:15.602652  285668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 13:25:15.602675  285668 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:25:15.602748  285668 addons.go:69] Setting storage-provisioner=true in profile "kindnet-507612"
	I0729 13:25:15.602781  285668 addons.go:234] Setting addon storage-provisioner=true in "kindnet-507612"
	I0729 13:25:15.602825  285668 host.go:66] Checking if "kindnet-507612" exists ...
	I0729 13:25:15.602823  285668 addons.go:69] Setting default-storageclass=true in profile "kindnet-507612"
	I0729 13:25:15.602891  285668 config.go:182] Loaded profile config "kindnet-507612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:25:15.602911  285668 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-507612"
	I0729 13:25:15.603159  285668 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:15.603197  285668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:15.603337  285668 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:15.603386  285668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:15.604528  285668 out.go:177] * Verifying Kubernetes components...
	I0729 13:25:15.606139  285668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:25:15.622493  285668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0729 13:25:15.622516  285668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0729 13:25:15.623205  285668 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:15.623596  285668 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:15.623822  285668 main.go:141] libmachine: Using API Version  1
	I0729 13:25:15.623841  285668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:15.624205  285668 main.go:141] libmachine: Using API Version  1
	I0729 13:25:15.624229  285668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:15.624239  285668 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:15.624480  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetState
	I0729 13:25:15.624625  285668 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:15.625274  285668 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:15.625359  285668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:15.628109  285668 addons.go:234] Setting addon default-storageclass=true in "kindnet-507612"
	I0729 13:25:15.628148  285668 host.go:66] Checking if "kindnet-507612" exists ...
	I0729 13:25:15.628395  285668 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:15.628427  285668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:15.642519  285668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I0729 13:25:15.643025  285668 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:15.643170  285668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0729 13:25:15.643608  285668 main.go:141] libmachine: Using API Version  1
	I0729 13:25:15.643642  285668 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:15.643699  285668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:15.644056  285668 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:15.644193  285668 main.go:141] libmachine: Using API Version  1
	I0729 13:25:15.644218  285668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:15.644263  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetState
	I0729 13:25:15.644603  285668 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:15.645264  285668 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:15.645313  285668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:15.646082  285668 main.go:141] libmachine: (kindnet-507612) Calling .DriverName
	I0729 13:25:15.648244  285668 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:25:11.818027  286560 machine.go:94] provisionDockerMachine start ...
	I0729 13:25:11.818040  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:11.818254  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:11.821061  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:11.821465  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:11.821499  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:11.821660  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:11.821837  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:11.821973  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:11.822122  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:11.822325  286560 main.go:141] libmachine: Using SSH client type: native
	I0729 13:25:11.822488  286560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.100 22 <nil> <nil>}
	I0729 13:25:11.822498  286560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:25:11.938767  286560 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-168661
	
	I0729 13:25:11.938790  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetMachineName
	I0729 13:25:11.939093  286560 buildroot.go:166] provisioning hostname "cert-expiration-168661"
	I0729 13:25:11.939117  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetMachineName
	I0729 13:25:11.939338  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:11.942296  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:11.942712  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:11.942730  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:11.943016  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:11.943194  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:11.943312  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:11.943467  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:11.943609  286560 main.go:141] libmachine: Using SSH client type: native
	I0729 13:25:11.943790  286560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.100 22 <nil> <nil>}
	I0729 13:25:11.943798  286560 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-168661 && echo "cert-expiration-168661" | sudo tee /etc/hostname
	I0729 13:25:12.074753  286560 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-168661
	
	I0729 13:25:12.074796  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:12.077880  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.078247  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:12.078268  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.078486  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:12.078698  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:12.078874  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:12.079000  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:12.079139  286560 main.go:141] libmachine: Using SSH client type: native
	I0729 13:25:12.079323  286560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.100 22 <nil> <nil>}
	I0729 13:25:12.079334  286560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-168661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-168661/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-168661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:25:12.198477  286560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:25:12.198496  286560 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:25:12.198512  286560 buildroot.go:174] setting up certificates
	I0729 13:25:12.198520  286560 provision.go:84] configureAuth start
	I0729 13:25:12.198528  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetMachineName
	I0729 13:25:12.198882  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetIP
	I0729 13:25:12.201750  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.202071  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:12.202090  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.202259  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:12.204540  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.204878  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:12.204901  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.205048  286560 provision.go:143] copyHostCerts
	I0729 13:25:12.205106  286560 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:25:12.205114  286560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:25:12.205183  286560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:25:12.205302  286560 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:25:12.205307  286560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:25:12.205340  286560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:25:12.205417  286560 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:25:12.205421  286560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:25:12.205445  286560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:25:12.205516  286560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-168661 san=[127.0.0.1 192.168.61.100 cert-expiration-168661 localhost minikube]
	I0729 13:25:12.323102  286560 provision.go:177] copyRemoteCerts
	I0729 13:25:12.323187  286560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:25:12.323224  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:12.326415  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.326791  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:12.326809  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.326985  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:12.327211  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:12.327444  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:12.327621  286560 sshutil.go:53] new ssh client: &{IP:192.168.61.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-expiration-168661/id_rsa Username:docker}
	I0729 13:25:12.417045  286560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:25:12.444934  286560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:25:12.474908  286560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:25:12.508772  286560 provision.go:87] duration metric: took 310.235596ms to configureAuth
	I0729 13:25:12.508811  286560 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:25:12.509025  286560 config.go:182] Loaded profile config "cert-expiration-168661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:25:12.509112  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:12.511651  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.512075  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:12.512097  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:12.512236  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:12.512440  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:12.512642  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:12.512820  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:12.512998  286560 main.go:141] libmachine: Using SSH client type: native
	I0729 13:25:12.513186  286560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.100 22 <nil> <nil>}
	I0729 13:25:12.513195  286560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:25:15.649634  285668 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:25:15.649658  285668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:25:15.649683  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHHostname
	I0729 13:25:15.652917  285668 main.go:141] libmachine: (kindnet-507612) DBG | domain kindnet-507612 has defined MAC address 52:54:00:83:98:72 in network mk-kindnet-507612
	I0729 13:25:15.653521  285668 main.go:141] libmachine: (kindnet-507612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:98:72", ip: ""} in network mk-kindnet-507612: {Iface:virbr4 ExpiryTime:2024-07-29 14:24:34 +0000 UTC Type:0 Mac:52:54:00:83:98:72 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:kindnet-507612 Clientid:01:52:54:00:83:98:72}
	I0729 13:25:15.653547  285668 main.go:141] libmachine: (kindnet-507612) DBG | domain kindnet-507612 has defined IP address 192.168.72.225 and MAC address 52:54:00:83:98:72 in network mk-kindnet-507612
	I0729 13:25:15.653801  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHPort
	I0729 13:25:15.653976  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHKeyPath
	I0729 13:25:15.654117  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHUsername
	I0729 13:25:15.654239  285668 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kindnet-507612/id_rsa Username:docker}
	I0729 13:25:15.662440  285668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0729 13:25:15.662971  285668 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:15.663491  285668 main.go:141] libmachine: Using API Version  1
	I0729 13:25:15.663518  285668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:15.663944  285668 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:15.664196  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetState
	I0729 13:25:15.666016  285668 main.go:141] libmachine: (kindnet-507612) Calling .DriverName
	I0729 13:25:15.666272  285668 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:25:15.666290  285668 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:25:15.666309  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHHostname
	I0729 13:25:15.669044  285668 main.go:141] libmachine: (kindnet-507612) DBG | domain kindnet-507612 has defined MAC address 52:54:00:83:98:72 in network mk-kindnet-507612
	I0729 13:25:15.669494  285668 main.go:141] libmachine: (kindnet-507612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:98:72", ip: ""} in network mk-kindnet-507612: {Iface:virbr4 ExpiryTime:2024-07-29 14:24:34 +0000 UTC Type:0 Mac:52:54:00:83:98:72 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:kindnet-507612 Clientid:01:52:54:00:83:98:72}
	I0729 13:25:15.669519  285668 main.go:141] libmachine: (kindnet-507612) DBG | domain kindnet-507612 has defined IP address 192.168.72.225 and MAC address 52:54:00:83:98:72 in network mk-kindnet-507612
	I0729 13:25:15.669751  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHPort
	I0729 13:25:15.669949  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHKeyPath
	I0729 13:25:15.670140  285668 main.go:141] libmachine: (kindnet-507612) Calling .GetSSHUsername
	I0729 13:25:15.670317  285668 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kindnet-507612/id_rsa Username:docker}
	I0729 13:25:15.807545  285668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 13:25:15.830485  285668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:25:15.987816  285668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:25:16.021989  285668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:25:16.352572  285668 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0729 13:25:16.352866  285668 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:16.352888  285668 main.go:141] libmachine: (kindnet-507612) Calling .Close
	I0729 13:25:16.353254  285668 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:16.353265  285668 main.go:141] libmachine: (kindnet-507612) DBG | Closing plugin on server side
	I0729 13:25:16.353276  285668 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:16.353287  285668 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:16.353296  285668 main.go:141] libmachine: (kindnet-507612) Calling .Close
	I0729 13:25:16.353573  285668 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:16.353583  285668 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:16.355445  285668 node_ready.go:35] waiting up to 15m0s for node "kindnet-507612" to be "Ready" ...
	I0729 13:25:16.415929  285668 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:16.415962  285668 main.go:141] libmachine: (kindnet-507612) Calling .Close
	I0729 13:25:16.416271  285668 main.go:141] libmachine: (kindnet-507612) DBG | Closing plugin on server side
	I0729 13:25:16.416381  285668 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:16.416398  285668 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 13:25:16.418872  285668 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "kindnet-507612" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0729 13:25:16.418895  285668 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0729 13:25:16.726106  285668 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:16.726137  285668 main.go:141] libmachine: (kindnet-507612) Calling .Close
	I0729 13:25:16.726423  285668 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:16.726442  285668 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:16.726450  285668 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:16.726457  285668 main.go:141] libmachine: (kindnet-507612) Calling .Close
	I0729 13:25:16.726490  285668 main.go:141] libmachine: (kindnet-507612) DBG | Closing plugin on server side
	I0729 13:25:16.726694  285668 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:16.726713  285668 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:16.728523  285668 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 13:25:14.045884  286026 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3 abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279 81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491 39c2b5a4c075a1373ca56f4827e5877f2bbe386b26497dac3b2dc9985c32f0af 2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e bbc6c7cec77398f63eaf7636abf0c75e5dcc70e73d8bd6eda59136e1b171863d 1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9 2dfd8e13fa66410cbfec1587eb036427aefe684e7fe3e6e4ae455085b21487af 0cce487107b817b498f80af5801d71fdc016df48a97693db92eb169d69a8f6b7 1c283f4632c9b422441d449b3bc60eba35857d93130146cd8788026e53086e7c: (14.983372579s)
	W0729 13:25:14.045974  286026 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3 abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279 81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491 39c2b5a4c075a1373ca56f4827e5877f2bbe386b26497dac3b2dc9985c32f0af 2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e bbc6c7cec77398f63eaf7636abf0c75e5dcc70e73d8bd6eda59136e1b171863d 1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9 2dfd8e13fa66410cbfec1587eb036427aefe684e7fe3e6e4ae455085b21487af 0cce487107b817b498f80af5801d71fdc016df48a97693db92eb169d69a8f6b7 1c283f4632c9b422441d449b3bc60eba35857d93130146cd8788026e53086e7c: Process exited with status 1
	stdout:
	c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3
	abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279
	
	stderr:
	E0729 13:25:14.036304    3833 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491\": container with ID starting with 81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491 not found: ID does not exist" containerID="81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491"
	time="2024-07-29T13:25:14Z" level=fatal msg="stopping the container \"81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491\": rpc error: code = NotFound desc = could not find container \"81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491\": container with ID starting with 81127feb4a1cb46a800e23839e5ec87a31e5eee5ef9a3e840920a7145d43e491 not found: ID does not exist"
	I0729 13:25:14.046064  286026 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:25:14.102217  286026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:25:14.113304  286026 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 29 13:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul 29 13:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Jul 29 13:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 29 13:24 /etc/kubernetes/scheduler.conf
	
	I0729 13:25:14.113377  286026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:25:14.123353  286026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:25:14.134398  286026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:25:14.144223  286026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:25:14.144309  286026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:25:14.154252  286026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:25:14.163671  286026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:25:14.163728  286026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:25:14.173210  286026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:25:14.182884  286026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:25:14.238769  286026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:25:15.000171  286026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:25:15.275369  286026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:25:15.356984  286026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:25:15.490124  286026 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:25:15.490216  286026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:25:15.991098  286026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:25:16.491186  286026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:25:16.506998  286026 api_server.go:72] duration metric: took 1.016860332s to wait for apiserver process to appear ...
	I0729 13:25:16.507032  286026 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:25:16.507059  286026 api_server.go:253] Checking apiserver healthz at https://192.168.50.118:8443/healthz ...
	I0729 13:25:16.729705  285668 addons.go:510] duration metric: took 1.127036804s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 13:25:18.359394  285668 node_ready.go:53] node "kindnet-507612" has status "Ready":"False"
	I0729 13:25:18.374937  286026 api_server.go:279] https://192.168.50.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:25:18.374971  286026 api_server.go:103] status: https://192.168.50.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:25:18.374984  286026 api_server.go:253] Checking apiserver healthz at https://192.168.50.118:8443/healthz ...
	I0729 13:25:18.451805  286026 api_server.go:279] https://192.168.50.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:25:18.451854  286026 api_server.go:103] status: https://192.168.50.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:25:18.508023  286026 api_server.go:253] Checking apiserver healthz at https://192.168.50.118:8443/healthz ...
	I0729 13:25:18.529779  286026 api_server.go:279] https://192.168.50.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:25:18.529816  286026 api_server.go:103] status: https://192.168.50.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:25:19.007809  286026 api_server.go:253] Checking apiserver healthz at https://192.168.50.118:8443/healthz ...
	I0729 13:25:19.018012  286026 api_server.go:279] https://192.168.50.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:25:19.018050  286026 api_server.go:103] status: https://192.168.50.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:25:19.507473  286026 api_server.go:253] Checking apiserver healthz at https://192.168.50.118:8443/healthz ...
	I0729 13:25:19.521319  286026 api_server.go:279] https://192.168.50.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:25:19.521354  286026 api_server.go:103] status: https://192.168.50.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:25:20.007162  286026 api_server.go:253] Checking apiserver healthz at https://192.168.50.118:8443/healthz ...
	I0729 13:25:20.012104  286026 api_server.go:279] https://192.168.50.118:8443/healthz returned 200:
	ok
	I0729 13:25:20.019191  286026 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:25:20.019225  286026 api_server.go:131] duration metric: took 3.512183959s to wait for apiserver health ...
	I0729 13:25:20.019237  286026 cni.go:84] Creating CNI manager for ""
	I0729 13:25:20.019247  286026 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:25:20.020979  286026 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:25:18.181145  286560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:25:18.181164  286560 machine.go:97] duration metric: took 6.363127056s to provisionDockerMachine
	I0729 13:25:18.181176  286560 start.go:293] postStartSetup for "cert-expiration-168661" (driver="kvm2")
	I0729 13:25:18.181189  286560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:25:18.181209  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:18.181691  286560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:25:18.181719  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:18.185275  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.185679  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:18.185699  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.185943  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:18.186165  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:18.186338  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:18.186499  286560 sshutil.go:53] new ssh client: &{IP:192.168.61.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-expiration-168661/id_rsa Username:docker}
	I0729 13:25:18.286722  286560 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:25:18.292501  286560 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:25:18.292522  286560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:25:18.292582  286560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:25:18.292698  286560 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:25:18.292833  286560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:25:18.304541  286560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:25:18.337405  286560 start.go:296] duration metric: took 156.211292ms for postStartSetup
	I0729 13:25:18.337440  286560 fix.go:56] duration metric: took 6.542060456s for fixHost
	I0729 13:25:18.337463  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:18.340615  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.341065  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:18.341103  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.341480  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:18.341732  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:18.341940  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:18.342143  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:18.342351  286560 main.go:141] libmachine: Using SSH client type: native
	I0729 13:25:18.342573  286560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.100 22 <nil> <nil>}
	I0729 13:25:18.342581  286560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:25:18.465770  286560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259518.457552503
	
	I0729 13:25:18.465784  286560 fix.go:216] guest clock: 1722259518.457552503
	I0729 13:25:18.465792  286560 fix.go:229] Guest: 2024-07-29 13:25:18.457552503 +0000 UTC Remote: 2024-07-29 13:25:18.337443927 +0000 UTC m=+6.684472841 (delta=120.108576ms)
	I0729 13:25:18.465817  286560 fix.go:200] guest clock delta is within tolerance: 120.108576ms
	I0729 13:25:18.465823  286560 start.go:83] releasing machines lock for "cert-expiration-168661", held for 6.670452882s
	I0729 13:25:18.465847  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:18.466172  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetIP
	I0729 13:25:18.469014  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.469403  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:18.469426  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.469598  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:18.470248  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:18.470431  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .DriverName
	I0729 13:25:18.470516  286560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:25:18.470550  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:18.470637  286560 ssh_runner.go:195] Run: cat /version.json
	I0729 13:25:18.470649  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHHostname
	I0729 13:25:18.474580  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.474595  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.474626  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:18.474639  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.474820  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:18.475046  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:18.475216  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:18.475291  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:18.475304  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:18.475475  286560 sshutil.go:53] new ssh client: &{IP:192.168.61.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-expiration-168661/id_rsa Username:docker}
	I0729 13:25:18.475797  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHPort
	I0729 13:25:18.475953  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHKeyPath
	I0729 13:25:18.476176  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetSSHUsername
	I0729 13:25:18.476314  286560 sshutil.go:53] new ssh client: &{IP:192.168.61.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-expiration-168661/id_rsa Username:docker}
	I0729 13:25:18.589518  286560 ssh_runner.go:195] Run: systemctl --version
	I0729 13:25:18.596775  286560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:25:18.774340  286560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:25:18.781802  286560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:25:18.781870  286560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:25:18.795542  286560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 13:25:18.795562  286560 start.go:495] detecting cgroup driver to use...
	I0729 13:25:18.795650  286560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:25:18.816008  286560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:25:18.830958  286560 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:25:18.831015  286560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:25:18.845853  286560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:25:18.861471  286560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:25:19.008423  286560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:25:19.149542  286560 docker.go:233] disabling docker service ...
	I0729 13:25:19.149609  286560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:25:19.166544  286560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:25:19.180935  286560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:25:19.317251  286560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:25:19.470306  286560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:25:19.492327  286560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:25:19.517028  286560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:25:19.517085  286560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:25:19.528053  286560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:25:19.528107  286560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:25:19.539228  286560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:25:19.549857  286560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:25:19.560546  286560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:25:19.573058  286560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:25:19.584144  286560 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:25:19.596068  286560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:25:19.606686  286560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:25:19.616724  286560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:25:19.626578  286560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:25:19.789858  286560 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:25:20.072546  286560 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:25:20.072613  286560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:25:20.079181  286560 start.go:563] Will wait 60s for crictl version
	I0729 13:25:20.079249  286560 ssh_runner.go:195] Run: which crictl
	I0729 13:25:20.084853  286560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:25:20.138061  286560 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:25:20.138142  286560 ssh_runner.go:195] Run: crio --version
	I0729 13:25:20.174908  286560 ssh_runner.go:195] Run: crio --version
	I0729 13:25:20.209105  286560 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:25:20.022648  286026 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:25:20.036620  286026 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:25:20.061085  286026 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:25:20.073486  286026 system_pods.go:59] 8 kube-system pods found
	I0729 13:25:20.073543  286026 system_pods.go:61] "coredns-5cfdc65f69-7stkm" [43e570b9-d2b4-4310-b7b5-4eab34adc337] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:25:20.073556  286026 system_pods.go:61] "coredns-5cfdc65f69-vvst8" [fbf7d002-4da9-4eb7-943b-29188a49469f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:25:20.073567  286026 system_pods.go:61] "etcd-kubernetes-upgrade-375555" [b60ac800-1884-475c-b8b8-b2c830447ba6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:25:20.073578  286026 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-375555" [d015f310-6c83-4cb4-a6ac-d25987be65b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:25:20.073596  286026 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-375555" [06bea577-0974-48e5-9817-82a06a65b879] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:25:20.073604  286026 system_pods.go:61] "kube-proxy-xlfdr" [edc46954-3475-4e5a-9778-51d9324372ae] Running
	I0729 13:25:20.073613  286026 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-375555" [a94e4507-e168-49a3-8a52-2d686e9155d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:25:20.073621  286026 system_pods.go:61] "storage-provisioner" [7218505f-dd3e-4af6-8cfa-3cb6396ae18b] Running
	I0729 13:25:20.073629  286026 system_pods.go:74] duration metric: took 12.519702ms to wait for pod list to return data ...
	I0729 13:25:20.073642  286026 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:25:20.078557  286026 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:25:20.078587  286026 node_conditions.go:123] node cpu capacity is 2
	I0729 13:25:20.078600  286026 node_conditions.go:105] duration metric: took 4.951828ms to run NodePressure ...
	I0729 13:25:20.078621  286026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:25:20.535112  286026 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:25:20.549163  286026 ops.go:34] apiserver oom_adj: -16
	I0729 13:25:20.549192  286026 kubeadm.go:597] duration metric: took 21.561450882s to restartPrimaryControlPlane
	I0729 13:25:20.549218  286026 kubeadm.go:394] duration metric: took 21.746795494s to StartCluster
	I0729 13:25:20.549241  286026 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:25:20.549348  286026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:25:20.551473  286026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:25:20.551756  286026 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.118 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:25:20.551816  286026 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:25:20.551890  286026 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-375555"
	I0729 13:25:20.551925  286026 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-375555"
	W0729 13:25:20.551938  286026 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:25:20.551972  286026 host.go:66] Checking if "kubernetes-upgrade-375555" exists ...
	I0729 13:25:20.551988  286026 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-375555"
	I0729 13:25:20.552030  286026 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-375555"
	I0729 13:25:20.552156  286026 config.go:182] Loaded profile config "kubernetes-upgrade-375555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:25:20.552387  286026 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:20.552429  286026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:20.552389  286026 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:20.552497  286026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:20.554784  286026 out.go:177] * Verifying Kubernetes components...
	I0729 13:25:20.556291  286026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:25:20.574386  286026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0729 13:25:20.574908  286026 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:20.575520  286026 main.go:141] libmachine: Using API Version  1
	I0729 13:25:20.575539  286026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:20.575910  286026 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:20.576090  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetState
	I0729 13:25:20.579500  286026 kapi.go:59] client config for kubernetes-upgrade-375555: &rest.Config{Host:"https://192.168.50.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.crt", KeyFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kubernetes-upgrade-375555/client.key", CAFile:"/home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 13:25:20.579818  286026 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-375555"
	W0729 13:25:20.579831  286026 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:25:20.579862  286026 host.go:66] Checking if "kubernetes-upgrade-375555" exists ...
	I0729 13:25:20.580271  286026 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:20.580305  286026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:20.580510  286026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0729 13:25:20.585261  286026 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:20.585745  286026 main.go:141] libmachine: Using API Version  1
	I0729 13:25:20.585765  286026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:20.586218  286026 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:20.586755  286026 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:20.586794  286026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:20.598385  286026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0729 13:25:20.599014  286026 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:20.599679  286026 main.go:141] libmachine: Using API Version  1
	I0729 13:25:20.599700  286026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:20.600198  286026 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:20.601049  286026 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:25:20.601095  286026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:25:20.611301  286026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0729 13:25:20.616960  286026 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:20.621056  286026 main.go:141] libmachine: Using API Version  1
	I0729 13:25:20.621088  286026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:20.621800  286026 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:20.624919  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetState
	I0729 13:25:20.625010  286026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I0729 13:25:20.625475  286026 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:25:20.626014  286026 main.go:141] libmachine: Using API Version  1
	I0729 13:25:20.626047  286026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:25:20.626467  286026 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:25:20.626686  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetState
	I0729 13:25:20.628096  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:25:20.628900  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .DriverName
	I0729 13:25:20.629173  286026 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:25:20.629196  286026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:25:20.629224  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:25:20.630628  286026 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:25:20.210448  286560 main.go:141] libmachine: (cert-expiration-168661) Calling .GetIP
	I0729 13:25:20.213323  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:20.213702  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:26:31", ip: ""} in network mk-cert-expiration-168661: {Iface:virbr3 ExpiryTime:2024-07-29 14:21:39 +0000 UTC Type:0 Mac:52:54:00:7d:26:31 Iaid: IPaddr:192.168.61.100 Prefix:24 Hostname:cert-expiration-168661 Clientid:01:52:54:00:7d:26:31}
	I0729 13:25:20.213725  286560 main.go:141] libmachine: (cert-expiration-168661) DBG | domain cert-expiration-168661 has defined IP address 192.168.61.100 and MAC address 52:54:00:7d:26:31 in network mk-cert-expiration-168661
	I0729 13:25:20.214002  286560 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 13:25:20.218997  286560 kubeadm.go:883] updating cluster {Name:cert-expiration-168661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.3 ClusterName:cert-expiration-168661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:25:20.219113  286560 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:25:20.219152  286560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:25:20.274755  286560 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:25:20.274776  286560 crio.go:433] Images already preloaded, skipping extraction
	I0729 13:25:20.274883  286560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:25:20.742720  286560 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:25:20.742736  286560 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:25:20.742745  286560 kubeadm.go:934] updating node { 192.168.61.100 8443 v1.30.3 crio true true} ...
	I0729 13:25:20.742884  286560 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-168661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-168661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:25:20.742966  286560 ssh_runner.go:195] Run: crio config
	I0729 13:25:21.247159  286560 cni.go:84] Creating CNI manager for ""
	I0729 13:25:21.247175  286560 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:25:21.247188  286560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:25:21.247214  286560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.100 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-168661 NodeName:cert-expiration-168661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:25:21.247371  286560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-168661"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:25:21.247430  286560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:25:21.292372  286560 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:25:21.292443  286560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:25:21.326730  286560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0729 13:25:21.411682  286560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:25:21.535346  286560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0729 13:25:21.673609  286560 ssh_runner.go:195] Run: grep 192.168.61.100	control-plane.minikube.internal$ /etc/hosts
	I0729 13:25:20.632356  286026 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:25:20.632373  286026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:25:20.632391  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHHostname
	I0729 13:25:20.632991  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:25:20.633718  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:23:47 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:25:20.633748  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:25:20.634234  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:25:20.634449  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:25:20.634595  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:25:20.634755  286026 sshutil.go:53] new ssh client: &{IP:192.168.50.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa Username:docker}
	I0729 13:25:20.636131  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:25:20.636542  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ac:80", ip: ""} in network mk-kubernetes-upgrade-375555: {Iface:virbr2 ExpiryTime:2024-07-29 14:23:47 +0000 UTC Type:0 Mac:52:54:00:d2:ac:80 Iaid: IPaddr:192.168.50.118 Prefix:24 Hostname:kubernetes-upgrade-375555 Clientid:01:52:54:00:d2:ac:80}
	I0729 13:25:20.636567  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | domain kubernetes-upgrade-375555 has defined IP address 192.168.50.118 and MAC address 52:54:00:d2:ac:80 in network mk-kubernetes-upgrade-375555
	I0729 13:25:20.636751  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHPort
	I0729 13:25:20.637079  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHKeyPath
	I0729 13:25:20.637235  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .GetSSHUsername
	I0729 13:25:20.637354  286026 sshutil.go:53] new ssh client: &{IP:192.168.50.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/kubernetes-upgrade-375555/id_rsa Username:docker}
	I0729 13:25:20.934295  286026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:25:20.964730  286026 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:25:20.964935  286026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:25:20.987726  286026 api_server.go:72] duration metric: took 435.921187ms to wait for apiserver process to appear ...
	I0729 13:25:20.987759  286026 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:25:20.987800  286026 api_server.go:253] Checking apiserver healthz at https://192.168.50.118:8443/healthz ...
	I0729 13:25:20.998358  286026 api_server.go:279] https://192.168.50.118:8443/healthz returned 200:
	ok
	I0729 13:25:21.001509  286026 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:25:21.001543  286026 api_server.go:131] duration metric: took 13.764113ms to wait for apiserver health ...
	I0729 13:25:21.001560  286026 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:25:21.035642  286026 system_pods.go:59] 8 kube-system pods found
	I0729 13:25:21.035773  286026 system_pods.go:61] "coredns-5cfdc65f69-7stkm" [43e570b9-d2b4-4310-b7b5-4eab34adc337] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:25:21.035799  286026 system_pods.go:61] "coredns-5cfdc65f69-vvst8" [fbf7d002-4da9-4eb7-943b-29188a49469f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:25:21.035840  286026 system_pods.go:61] "etcd-kubernetes-upgrade-375555" [b60ac800-1884-475c-b8b8-b2c830447ba6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:25:21.035872  286026 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-375555" [d015f310-6c83-4cb4-a6ac-d25987be65b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:25:21.035917  286026 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-375555" [06bea577-0974-48e5-9817-82a06a65b879] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:25:21.036133  286026 system_pods.go:61] "kube-proxy-xlfdr" [edc46954-3475-4e5a-9778-51d9324372ae] Running
	I0729 13:25:21.036159  286026 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-375555" [a94e4507-e168-49a3-8a52-2d686e9155d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:25:21.036174  286026 system_pods.go:61] "storage-provisioner" [7218505f-dd3e-4af6-8cfa-3cb6396ae18b] Running
	I0729 13:25:21.036226  286026 system_pods.go:74] duration metric: took 34.657848ms to wait for pod list to return data ...
	I0729 13:25:21.036249  286026 kubeadm.go:582] duration metric: took 484.450983ms to wait for: map[apiserver:true system_pods:true]
	I0729 13:25:21.036293  286026 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:25:21.042918  286026 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:25:21.042949  286026 node_conditions.go:123] node cpu capacity is 2
	I0729 13:25:21.042961  286026 node_conditions.go:105] duration metric: took 6.648468ms to run NodePressure ...
	I0729 13:25:21.042976  286026 start.go:241] waiting for startup goroutines ...
	I0729 13:25:21.055056  286026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:25:21.055438  286026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:25:22.076985  286026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.021511029s)
	I0729 13:25:22.077085  286026 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:22.077100  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .Close
	I0729 13:25:22.077466  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Closing plugin on server side
	I0729 13:25:22.077506  286026 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:22.077563  286026 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:22.077582  286026 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:22.077635  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .Close
	I0729 13:25:22.077895  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Closing plugin on server side
	I0729 13:25:22.078047  286026 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:22.078061  286026 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:22.079631  286026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.024516022s)
	I0729 13:25:22.079728  286026 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:22.079796  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .Close
	I0729 13:25:22.080194  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Closing plugin on server side
	I0729 13:25:22.081706  286026 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:22.081739  286026 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:22.081749  286026 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:22.081800  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .Close
	I0729 13:25:22.082100  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Closing plugin on server side
	I0729 13:25:22.082151  286026 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:22.082193  286026 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:22.090896  286026 main.go:141] libmachine: Making call to close driver server
	I0729 13:25:22.090917  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) Calling .Close
	I0729 13:25:22.091263  286026 main.go:141] libmachine: (kubernetes-upgrade-375555) DBG | Closing plugin on server side
	I0729 13:25:22.091311  286026 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:25:22.091335  286026 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:25:22.093051  286026 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 13:25:22.094259  286026 addons.go:510] duration metric: took 1.542438479s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 13:25:22.094299  286026 start.go:246] waiting for cluster config update ...
	I0729 13:25:22.094314  286026 start.go:255] writing updated cluster config ...
	I0729 13:25:22.094612  286026 ssh_runner.go:195] Run: rm -f paused
	I0729 13:25:22.154630  286026 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:25:22.156103  286026 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-375555" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.117391933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259523117366598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b75f49f4-5fc2-43db-a2ea-5532fc37149e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.117949046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ff547bf-416f-4b9c-ae40-101031a3b760 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.118017585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ff547bf-416f-4b9c-ae40-101031a3b760 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.118355099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e80593c1146114bd86311ea91c923185639f48efc8c4ad7c9220fd2e0d6b171,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518748292461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653a1b23e61dd9406784d8855266998d8319614fbc784e49868b46e1c8fda5f,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518800634394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c6228cd6aa66901e54dc33b48a4ca8ae8dc5a3bc62f909033d101f51ba2168,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722259518794207776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c08c83475f89640de808ce080f8679bbb8c0facdd6f70490c2d540c280bb4a,PodSandboxId:1011f7079b54ec0f147b1ebb47990dba753f31ac7fbed62808120ea38cf694d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAI
NER_RUNNING,CreatedAt:1722259515949737301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b780306a62cbc1cb47acb6c2e2b120a5ee8f26cafe9e84dd4c051719c8c6854b,PodSandboxId:3f1c45e9ba80416ecc6e019c0500054c4025c3bfce6a006c8dda866b1a9006a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938
,State:CONTAINER_RUNNING,CreatedAt:1722259515937863519,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a6d673692b26bbb8aec176e6dda065266b9076496515b2b8ac1cef3c2aa50,PodSandboxId:c9253881445b29e5caee09e554bd157777fdaa40af3d180ebaff2d0fcf19149f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CO
NTAINER_RUNNING,CreatedAt:1722259512265660628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2697fbe4723922fcf8873c80920925111bd9e67d4d1f68bc8afa618ebf4b8d7,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1722259512242131249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9293bc3c4ce7500e45265bbfa37501dd7ae066cb76a00d773550afc7c991f411,PodSandboxId:3050660f724b9af278643bb7cb84bbfeeb62311bbf5bb13fea5bfdc85e771a47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722259506657710
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6b19ec85ca1d311851432f70dbd4a7abe684508ac9c2692b2012d67a4cfa3c,PodSandboxId:bfbd9816e43d40d6cdf33e7173bd28834462787e9eb41617feee5b1527ab9870,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722259506589218252,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498621944748,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498577487707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c2b5a4c075a1373ca56f4827e5877f2bbe386b26497dac3b2dc9985c32f0af,PodSandboxId:f1a538d2024a7b08740c3fff95869d06e07b45d170ba645d301ea088d6fe
23af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722259495207917487,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc6c7cec77398f63eaf7636abf0c75e5dcc70e73d8bd6eda59136e1b171863d,PodSandboxId:86912256d48b883d2f45c69ce6252cd98e4a7ea758f3b315f3432a6cfc4778e7,M
etadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722259495146316902,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e,PodSandboxId:f652a1f13420d8de0e6903613eb129232f1ceaa8adf7d7052d48e7c09c4f79d3,Metadata:&ContainerMetadata{Name:kube-
controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722259495186355433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9,PodSandboxId:bf50f3505d06b91f983270205aca8655de08af08e511a0f11c4e400c20e9fcd3,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722259495026434492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd8e13fa66410cbfec1587eb036427aefe684e7fe3e6e4ae455085b21487af,PodSandboxId:41819b336cb38b9572c0b4116d038d05e0a9d509ea4b2d053fb5e70a7dff7288,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722259494975325254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ff547bf-416f-4b9c-ae40-101031a3b760 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.166158325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8275040a-2d84-44db-a0e0-dff4ade4b1ed name=/runtime.v1.RuntimeService/Version
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.166232763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8275040a-2d84-44db-a0e0-dff4ade4b1ed name=/runtime.v1.RuntimeService/Version
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.167176870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fff9df3-fb59-42dc-b697-400f50686bed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.167658156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259523167627032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fff9df3-fb59-42dc-b697-400f50686bed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.168125050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d1629d4-3627-48c2-a75a-c924e5293cd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.168196640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d1629d4-3627-48c2-a75a-c924e5293cd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.168610054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e80593c1146114bd86311ea91c923185639f48efc8c4ad7c9220fd2e0d6b171,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518748292461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653a1b23e61dd9406784d8855266998d8319614fbc784e49868b46e1c8fda5f,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518800634394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c6228cd6aa66901e54dc33b48a4ca8ae8dc5a3bc62f909033d101f51ba2168,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722259518794207776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c08c83475f89640de808ce080f8679bbb8c0facdd6f70490c2d540c280bb4a,PodSandboxId:1011f7079b54ec0f147b1ebb47990dba753f31ac7fbed62808120ea38cf694d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAI
NER_RUNNING,CreatedAt:1722259515949737301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b780306a62cbc1cb47acb6c2e2b120a5ee8f26cafe9e84dd4c051719c8c6854b,PodSandboxId:3f1c45e9ba80416ecc6e019c0500054c4025c3bfce6a006c8dda866b1a9006a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938
,State:CONTAINER_RUNNING,CreatedAt:1722259515937863519,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a6d673692b26bbb8aec176e6dda065266b9076496515b2b8ac1cef3c2aa50,PodSandboxId:c9253881445b29e5caee09e554bd157777fdaa40af3d180ebaff2d0fcf19149f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CO
NTAINER_RUNNING,CreatedAt:1722259512265660628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2697fbe4723922fcf8873c80920925111bd9e67d4d1f68bc8afa618ebf4b8d7,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1722259512242131249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9293bc3c4ce7500e45265bbfa37501dd7ae066cb76a00d773550afc7c991f411,PodSandboxId:3050660f724b9af278643bb7cb84bbfeeb62311bbf5bb13fea5bfdc85e771a47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722259506657710
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6b19ec85ca1d311851432f70dbd4a7abe684508ac9c2692b2012d67a4cfa3c,PodSandboxId:bfbd9816e43d40d6cdf33e7173bd28834462787e9eb41617feee5b1527ab9870,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722259506589218252,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498621944748,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498577487707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c2b5a4c075a1373ca56f4827e5877f2bbe386b26497dac3b2dc9985c32f0af,PodSandboxId:f1a538d2024a7b08740c3fff95869d06e07b45d170ba645d301ea088d6fe
23af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722259495207917487,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc6c7cec77398f63eaf7636abf0c75e5dcc70e73d8bd6eda59136e1b171863d,PodSandboxId:86912256d48b883d2f45c69ce6252cd98e4a7ea758f3b315f3432a6cfc4778e7,M
etadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722259495146316902,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e,PodSandboxId:f652a1f13420d8de0e6903613eb129232f1ceaa8adf7d7052d48e7c09c4f79d3,Metadata:&ContainerMetadata{Name:kube-
controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722259495186355433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9,PodSandboxId:bf50f3505d06b91f983270205aca8655de08af08e511a0f11c4e400c20e9fcd3,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722259495026434492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd8e13fa66410cbfec1587eb036427aefe684e7fe3e6e4ae455085b21487af,PodSandboxId:41819b336cb38b9572c0b4116d038d05e0a9d509ea4b2d053fb5e70a7dff7288,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722259494975325254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d1629d4-3627-48c2-a75a-c924e5293cd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.220687890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9a9e49d-c084-490a-878a-7dc7fa040ec7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.220796391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9a9e49d-c084-490a-878a-7dc7fa040ec7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.229529447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a063a56-4380-4991-9b43-7120742e45f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.230050642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259523230018576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a063a56-4380-4991-9b43-7120742e45f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.230741297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3fc1ae3-f3bd-49c8-a6a9-156cc365ad12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.230860125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3fc1ae3-f3bd-49c8-a6a9-156cc365ad12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.231326969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e80593c1146114bd86311ea91c923185639f48efc8c4ad7c9220fd2e0d6b171,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518748292461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653a1b23e61dd9406784d8855266998d8319614fbc784e49868b46e1c8fda5f,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518800634394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c6228cd6aa66901e54dc33b48a4ca8ae8dc5a3bc62f909033d101f51ba2168,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722259518794207776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c08c83475f89640de808ce080f8679bbb8c0facdd6f70490c2d540c280bb4a,PodSandboxId:1011f7079b54ec0f147b1ebb47990dba753f31ac7fbed62808120ea38cf694d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAI
NER_RUNNING,CreatedAt:1722259515949737301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b780306a62cbc1cb47acb6c2e2b120a5ee8f26cafe9e84dd4c051719c8c6854b,PodSandboxId:3f1c45e9ba80416ecc6e019c0500054c4025c3bfce6a006c8dda866b1a9006a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938
,State:CONTAINER_RUNNING,CreatedAt:1722259515937863519,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a6d673692b26bbb8aec176e6dda065266b9076496515b2b8ac1cef3c2aa50,PodSandboxId:c9253881445b29e5caee09e554bd157777fdaa40af3d180ebaff2d0fcf19149f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CO
NTAINER_RUNNING,CreatedAt:1722259512265660628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2697fbe4723922fcf8873c80920925111bd9e67d4d1f68bc8afa618ebf4b8d7,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1722259512242131249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9293bc3c4ce7500e45265bbfa37501dd7ae066cb76a00d773550afc7c991f411,PodSandboxId:3050660f724b9af278643bb7cb84bbfeeb62311bbf5bb13fea5bfdc85e771a47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722259506657710
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6b19ec85ca1d311851432f70dbd4a7abe684508ac9c2692b2012d67a4cfa3c,PodSandboxId:bfbd9816e43d40d6cdf33e7173bd28834462787e9eb41617feee5b1527ab9870,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722259506589218252,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498621944748,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498577487707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c2b5a4c075a1373ca56f4827e5877f2bbe386b26497dac3b2dc9985c32f0af,PodSandboxId:f1a538d2024a7b08740c3fff95869d06e07b45d170ba645d301ea088d6fe
23af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722259495207917487,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc6c7cec77398f63eaf7636abf0c75e5dcc70e73d8bd6eda59136e1b171863d,PodSandboxId:86912256d48b883d2f45c69ce6252cd98e4a7ea758f3b315f3432a6cfc4778e7,M
etadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722259495146316902,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e,PodSandboxId:f652a1f13420d8de0e6903613eb129232f1ceaa8adf7d7052d48e7c09c4f79d3,Metadata:&ContainerMetadata{Name:kube-
controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722259495186355433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9,PodSandboxId:bf50f3505d06b91f983270205aca8655de08af08e511a0f11c4e400c20e9fcd3,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722259495026434492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd8e13fa66410cbfec1587eb036427aefe684e7fe3e6e4ae455085b21487af,PodSandboxId:41819b336cb38b9572c0b4116d038d05e0a9d509ea4b2d053fb5e70a7dff7288,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722259494975325254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3fc1ae3-f3bd-49c8-a6a9-156cc365ad12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.270341849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c96e8d14-20dc-4325-b52b-bc40410f25f3 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.270455270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c96e8d14-20dc-4325-b52b-bc40410f25f3 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.271970421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55b2c8b0-0b1b-4abb-8c90-028ea8ffe44b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.272333315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259523272308141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55b2c8b0-0b1b-4abb-8c90-028ea8ffe44b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.273106228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9e5803c-7db0-462c-bf95-9e3d25d8b6f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.273164354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9e5803c-7db0-462c-bf95-9e3d25d8b6f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:25:23 kubernetes-upgrade-375555 crio[3020]: time="2024-07-29 13:25:23.273478087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e80593c1146114bd86311ea91c923185639f48efc8c4ad7c9220fd2e0d6b171,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518748292461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653a1b23e61dd9406784d8855266998d8319614fbc784e49868b46e1c8fda5f,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259518800634394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c6228cd6aa66901e54dc33b48a4ca8ae8dc5a3bc62f909033d101f51ba2168,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722259518794207776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c08c83475f89640de808ce080f8679bbb8c0facdd6f70490c2d540c280bb4a,PodSandboxId:1011f7079b54ec0f147b1ebb47990dba753f31ac7fbed62808120ea38cf694d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAI
NER_RUNNING,CreatedAt:1722259515949737301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b780306a62cbc1cb47acb6c2e2b120a5ee8f26cafe9e84dd4c051719c8c6854b,PodSandboxId:3f1c45e9ba80416ecc6e019c0500054c4025c3bfce6a006c8dda866b1a9006a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938
,State:CONTAINER_RUNNING,CreatedAt:1722259515937863519,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a6d673692b26bbb8aec176e6dda065266b9076496515b2b8ac1cef3c2aa50,PodSandboxId:c9253881445b29e5caee09e554bd157777fdaa40af3d180ebaff2d0fcf19149f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CO
NTAINER_RUNNING,CreatedAt:1722259512265660628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2697fbe4723922fcf8873c80920925111bd9e67d4d1f68bc8afa618ebf4b8d7,PodSandboxId:4bc93ed605cc736527c5f20f020ad6c9251285dec32dfe0bda27d53a76e56b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1722259512242131249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7218505f-dd3e-4af6-8cfa-3cb6396ae18b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9293bc3c4ce7500e45265bbfa37501dd7ae066cb76a00d773550afc7c991f411,PodSandboxId:3050660f724b9af278643bb7cb84bbfeeb62311bbf5bb13fea5bfdc85e771a47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722259506657710
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6b19ec85ca1d311851432f70dbd4a7abe684508ac9c2692b2012d67a4cfa3c,PodSandboxId:bfbd9816e43d40d6cdf33e7173bd28834462787e9eb41617feee5b1527ab9870,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722259506589218252,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3,PodSandboxId:64ffa3c48119b3ae4c2b35cc743f3b68eb77aa45e0bdfa42d624009c66a90437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498621944748,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7stkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e570b9-d2b4-4310-b7b5-4eab34adc337,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279,PodSandboxId:b3514530644724d041dea925ed228e5539f0e3ce8d6a95cf7f8d98a3bbbc4b32,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259498577487707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vvst8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbf7d002-4da9-4eb7-943b-29188a49469f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c2b5a4c075a1373ca56f4827e5877f2bbe386b26497dac3b2dc9985c32f0af,PodSandboxId:f1a538d2024a7b08740c3fff95869d06e07b45d170ba645d301ea088d6fe
23af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722259495207917487,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dd47247145b2e593a944aa8f29ad5b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc6c7cec77398f63eaf7636abf0c75e5dcc70e73d8bd6eda59136e1b171863d,PodSandboxId:86912256d48b883d2f45c69ce6252cd98e4a7ea758f3b315f3432a6cfc4778e7,M
etadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722259495146316902,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0091ca1634c89b1411b250eca142b304,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e,PodSandboxId:f652a1f13420d8de0e6903613eb129232f1ceaa8adf7d7052d48e7c09c4f79d3,Metadata:&ContainerMetadata{Name:kube-
controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722259495186355433,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca72430428739b20b9bb87ba0194b234,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9,PodSandboxId:bf50f3505d06b91f983270205aca8655de08af08e511a0f11c4e400c20e9fcd3,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722259495026434492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-375555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7f19072e04bedd652e314a60c7e517,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfd8e13fa66410cbfec1587eb036427aefe684e7fe3e6e4ae455085b21487af,PodSandboxId:41819b336cb38b9572c0b4116d038d05e0a9d509ea4b2d053fb5e70a7dff7288,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722259494975325254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlfdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc46954-3475-4e5a-9778-51d9324372ae,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9e5803c-7db0-462c-bf95-9e3d25d8b6f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f653a1b23e61d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   2                   b351453064472       coredns-5cfdc65f69-vvst8
	17c6228cd6aa6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   4bc93ed605cc7       storage-provisioner
	8e80593c11461       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   2                   64ffa3c48119b       coredns-5cfdc65f69-7stkm
	59c08c83475f8       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   1011f7079b54e       kube-controller-manager-kubernetes-upgrade-375555
	b780306a62cbc       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   3f1c45e9ba804       kube-apiserver-kubernetes-upgrade-375555
	d73a6d673692b       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   11 seconds ago      Running             kube-proxy                2                   c9253881445b2       kube-proxy-xlfdr
	f2697fbe47239       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       2                   4bc93ed605cc7       storage-provisioner
	9293bc3c4ce75       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   16 seconds ago      Running             kube-scheduler            2                   3050660f724b9       kube-scheduler-kubernetes-upgrade-375555
	ab6b19ec85ca1       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   16 seconds ago      Running             etcd                      2                   bfbd9816e43d4       etcd-kubernetes-upgrade-375555
	c0f25bdf2f04f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   64ffa3c48119b       coredns-5cfdc65f69-7stkm
	abdd2a5a06c3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   b351453064472       coredns-5cfdc65f69-vvst8
	39c2b5a4c075a       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   28 seconds ago      Exited              kube-scheduler            1                   f1a538d2024a7       kube-scheduler-kubernetes-upgrade-375555
	2c356e65e7361       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   28 seconds ago      Exited              kube-controller-manager   1                   f652a1f13420d       kube-controller-manager-kubernetes-upgrade-375555
	bbc6c7cec7739       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   28 seconds ago      Exited              etcd                      1                   86912256d48b8       etcd-kubernetes-upgrade-375555
	1a7b1fc0cdaf9       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   28 seconds ago      Exited              kube-apiserver            1                   bf50f3505d06b       kube-apiserver-kubernetes-upgrade-375555
	2dfd8e13fa664       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   28 seconds ago      Exited              kube-proxy                1                   41819b336cb38       kube-proxy-xlfdr
	
	
	==> coredns [8e80593c1146114bd86311ea91c923185639f48efc8c4ad7c9220fd2e0d6b171] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f653a1b23e61dd9406784d8855266998d8319614fbc784e49868b46e1c8fda5f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-375555
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-375555
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:24:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-375555
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:25:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:25:18 +0000   Mon, 29 Jul 2024 13:24:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:25:18 +0000   Mon, 29 Jul 2024 13:24:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:25:18 +0000   Mon, 29 Jul 2024 13:24:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:25:18 +0000   Mon, 29 Jul 2024 13:24:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.118
	  Hostname:    kubernetes-upgrade-375555
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008043345e294e4287b28e5b3a208bb2
	  System UUID:                00804334-5e29-4e42-87b2-8e5b3a208bb2
	  Boot ID:                    48d782f8-eec2-4f98-b7cf-e9ec5ed3e3ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-7stkm                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 coredns-5cfdc65f69-vvst8                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 etcd-kubernetes-upgrade-375555                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         67s
	  kube-system                 kube-apiserver-kubernetes-upgrade-375555             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-375555    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-xlfdr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-kubernetes-upgrade-375555             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-375555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-375555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)  kubelet          Node kubernetes-upgrade-375555 status is now: NodeHasSufficientPID
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           69s                node-controller  Node kubernetes-upgrade-375555 event: Registered Node kubernetes-upgrade-375555 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-375555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-375555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-375555 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-375555 event: Registered Node kubernetes-upgrade-375555 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.626767] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.061877] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060776] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.167533] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.152705] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.271986] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[Jul29 13:24] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +2.194876] systemd-fstab-generator[861]: Ignoring "noauto" option for root device
	[  +0.058863] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.366919] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.089326] kauditd_printk_skb: 69 callbacks suppressed
	[ +37.596321] systemd-fstab-generator[2200]: Ignoring "noauto" option for root device
	[  +0.095783] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.059797] systemd-fstab-generator[2212]: Ignoring "noauto" option for root device
	[  +0.178318] systemd-fstab-generator[2226]: Ignoring "noauto" option for root device
	[  +0.220129] systemd-fstab-generator[2264]: Ignoring "noauto" option for root device
	[  +1.373936] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +1.390836] systemd-fstab-generator[3205]: Ignoring "noauto" option for root device
	[Jul29 13:25] kauditd_printk_skb: 300 callbacks suppressed
	[  +5.831939] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.103885] systemd-fstab-generator[4156]: Ignoring "noauto" option for root device
	[  +0.098362] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.474083] systemd-fstab-generator[4523]: Ignoring "noauto" option for root device
	[  +0.129705] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [ab6b19ec85ca1d311851432f70dbd4a7abe684508ac9c2692b2012d67a4cfa3c] <==
	{"level":"info","ts":"2024-07-29T13:25:06.800859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e switched to configuration voters=(16588646812420010094)"}
	{"level":"info","ts":"2024-07-29T13:25:06.80305Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ac3576c109105283","local-member-id":"e636b662aeb5f06e","added-peer-id":"e636b662aeb5f06e","added-peer-peer-urls":["https://192.168.50.118:2380"]}
	{"level":"info","ts":"2024-07-29T13:25:06.803369Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ac3576c109105283","local-member-id":"e636b662aeb5f06e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:25:06.80345Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:25:06.804486Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:25:06.804722Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e636b662aeb5f06e","initial-advertise-peer-urls":["https://192.168.50.118:2380"],"listen-peer-urls":["https://192.168.50.118:2380"],"advertise-client-urls":["https://192.168.50.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:25:06.804764Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:25:06.804821Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.118:2380"}
	{"level":"info","ts":"2024-07-29T13:25:06.804827Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.118:2380"}
	{"level":"info","ts":"2024-07-29T13:25:07.975407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:25:07.975488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:25:07.975505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e received MsgPreVoteResp from e636b662aeb5f06e at term 2"}
	{"level":"info","ts":"2024-07-29T13:25:07.975517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:25:07.97561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e received MsgVoteResp from e636b662aeb5f06e at term 3"}
	{"level":"info","ts":"2024-07-29T13:25:07.975621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:25:07.975629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e636b662aeb5f06e elected leader e636b662aeb5f06e at term 3"}
	{"level":"info","ts":"2024-07-29T13:25:07.97714Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e636b662aeb5f06e","local-member-attributes":"{Name:kubernetes-upgrade-375555 ClientURLs:[https://192.168.50.118:2379]}","request-path":"/0/members/e636b662aeb5f06e/attributes","cluster-id":"ac3576c109105283","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:25:07.977167Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:25:07.977193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:25:07.978955Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T13:25:07.980106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.118:2379"}
	{"level":"info","ts":"2024-07-29T13:25:07.981025Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T13:25:07.98224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:25:07.986628Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:25:07.986682Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [bbc6c7cec77398f63eaf7636abf0c75e5dcc70e73d8bd6eda59136e1b171863d] <==
	{"level":"info","ts":"2024-07-29T13:24:55.822001Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T13:24:55.876294Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ac3576c109105283","local-member-id":"e636b662aeb5f06e","commit-index":421}
	{"level":"info","ts":"2024-07-29T13:24:55.876381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T13:24:55.876433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e became follower at term 2"}
	{"level":"info","ts":"2024-07-29T13:24:55.876452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e636b662aeb5f06e [peers: [], term: 2, commit: 421, applied: 0, lastindex: 421, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T13:24:55.910805Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T13:24:56.04215Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":408}
	{"level":"info","ts":"2024-07-29T13:24:56.066687Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T13:24:56.073076Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e636b662aeb5f06e","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:24:56.076085Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e636b662aeb5f06e"}
	{"level":"info","ts":"2024-07-29T13:24:56.076262Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"e636b662aeb5f06e","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T13:24:56.07688Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T13:24:56.079404Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:24:56.08293Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e636b662aeb5f06e","initial-advertise-peer-urls":["https://192.168.50.118:2380"],"listen-peer-urls":["https://192.168.50.118:2380"],"advertise-client-urls":["https://192.168.50.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:24:56.083292Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:24:56.081842Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T13:24:56.082008Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:24:56.083522Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:24:56.083717Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:24:56.082392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e636b662aeb5f06e switched to configuration voters=(16588646812420010094)"}
	{"level":"info","ts":"2024-07-29T13:24:56.087824Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ac3576c109105283","local-member-id":"e636b662aeb5f06e","added-peer-id":"e636b662aeb5f06e","added-peer-peer-urls":["https://192.168.50.118:2380"]}
	{"level":"info","ts":"2024-07-29T13:24:56.088309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ac3576c109105283","local-member-id":"e636b662aeb5f06e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:24:56.093041Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:24:56.082519Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.118:2380"}
	{"level":"info","ts":"2024-07-29T13:24:56.104494Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.118:2380"}
	
	
	==> kernel <==
	 13:25:23 up 1 min,  0 users,  load average: 0.55, 0.26, 0.10
	Linux kubernetes-upgrade-375555 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9] <==
	I0729 13:24:55.864728       1 options.go:228] external host was not specified, using 192.168.50.118
	I0729 13:24:55.875772       1 server.go:142] Version: v1.31.0-beta.0
	I0729 13:24:55.875830       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [b780306a62cbc1cb47acb6c2e2b120a5ee8f26cafe9e84dd4c051719c8c6854b] <==
	I0729 13:25:18.458357       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 13:25:18.461241       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:25:18.461324       1 policy_source.go:224] refreshing policies
	I0729 13:25:18.482082       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:25:18.482835       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 13:25:18.483660       1 aggregator.go:171] initial CRD sync complete...
	I0729 13:25:18.483733       1 autoregister_controller.go:144] Starting autoregister controller
	I0729 13:25:18.483767       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 13:25:18.483796       1 cache.go:39] Caches are synced for autoregister controller
	I0729 13:25:18.493260       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 13:25:18.493341       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 13:25:18.493464       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 13:25:18.495438       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:25:18.499371       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 13:25:18.519154       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:25:18.519587       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0729 13:25:18.547985       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 13:25:19.298018       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 13:25:20.373958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 13:25:20.397493       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 13:25:20.464808       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 13:25:20.505335       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 13:25:20.513645       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 13:25:22.908231       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 13:25:22.926023       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e] <==
	
	
	==> kube-controller-manager [59c08c83475f89640de808ce080f8679bbb8c0facdd6f70490c2d540c280bb4a] <==
	I0729 13:25:22.822046       1 shared_informer.go:320] Caches are synced for job
	I0729 13:25:22.822088       1 shared_informer.go:320] Caches are synced for disruption
	I0729 13:25:22.822116       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 13:25:22.822158       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 13:25:22.822210       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-375555"
	I0729 13:25:22.823144       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 13:25:22.826941       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 13:25:22.832332       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 13:25:22.846652       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 13:25:22.855004       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 13:25:22.862967       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 13:25:22.870213       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 13:25:22.870651       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 13:25:22.871032       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 13:25:22.876037       1 shared_informer.go:320] Caches are synced for GC
	I0729 13:25:22.970365       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 13:25:22.978989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="108.280473ms"
	I0729 13:25:22.979130       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="54.766µs"
	I0729 13:25:23.004744       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 13:25:23.061223       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 13:25:23.072053       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 13:25:23.076485       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 13:25:23.132813       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 13:25:23.132901       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 13:25:23.145682       1 shared_informer.go:320] Caches are synced for resource quota
	
	
	==> kube-proxy [2dfd8e13fa66410cbfec1587eb036427aefe684e7fe3e6e4ae455085b21487af] <==
	
	
	==> kube-proxy [d73a6d673692b26bbb8aec176e6dda065266b9076496515b2b8ac1cef3c2aa50] <==
	E0729 13:25:12.492158       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 13:25:12.494788       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-375555\": dial tcp 192.168.50.118:8443: connect: connection refused"
	E0729 13:25:13.584445       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-375555\": dial tcp 192.168.50.118:8443: connect: connection refused"
	E0729 13:25:15.820167       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-375555\": dial tcp 192.168.50.118:8443: connect: connection refused"
	I0729 13:25:20.372959       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.118"]
	E0729 13:25:20.373189       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 13:25:20.432527       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 13:25:20.432647       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:25:20.432686       1 server_linux.go:170] "Using iptables Proxier"
	I0729 13:25:20.441528       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 13:25:20.442134       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 13:25:20.442401       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:25:20.445010       1 config.go:197] "Starting service config controller"
	I0729 13:25:20.445344       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:25:20.445485       1 config.go:104] "Starting endpoint slice config controller"
	I0729 13:25:20.445638       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:25:20.446468       1 config.go:326] "Starting node config controller"
	I0729 13:25:20.446525       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:25:20.553620       1 shared_informer.go:320] Caches are synced for node config
	I0729 13:25:20.553743       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:25:20.553763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [39c2b5a4c075a1373ca56f4827e5877f2bbe386b26497dac3b2dc9985c32f0af] <==
	
	
	==> kube-scheduler [9293bc3c4ce7500e45265bbfa37501dd7ae066cb76a00d773550afc7c991f411] <==
	E0729 13:25:14.779942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.50.118:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:15.044301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.118:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.118:8443: connect: connection refused
	E0729 13:25:15.044341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.50.118:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:15.136940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.50.118:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.118:8443: connect: connection refused
	E0729 13:25:15.136994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.50.118:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:15.169211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.50.118:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.118:8443: connect: connection refused
	E0729 13:25:15.169263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.50.118:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:15.254924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.50.118:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.118:8443: connect: connection refused
	E0729 13:25:15.254981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.50.118:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:15.745006       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.50.118:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.118:8443: connect: connection refused
	E0729 13:25:15.745099       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.50.118:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:15.824443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.50.118:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.118:8443: connect: connection refused
	E0729 13:25:15.824520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.50.118:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:16.124751       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.50.118:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.118:8443: connect: connection refused
	E0729 13:25:16.124811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.50.118:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.118:8443: connect: connection refused" logger="UnhandledError"
	W0729 13:25:18.375326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:25:18.378850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:25:18.379040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 13:25:18.379374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:25:18.379639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 13:25:18.380092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:25:18.380248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 13:25:18.375426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 13:25:18.380616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0729 13:25:18.380696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	
	
	==> kubelet <==
	Jul 29 13:25:15 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:15.719300    4163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca72430428739b20b9bb87ba0194b234-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-375555\" (UID: \"ca72430428739b20b9bb87ba0194b234\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-375555"
	Jul 29 13:25:15 kubernetes-upgrade-375555 kubelet[4163]: E0729 13:25:15.718982    4163 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.118:8443: connect: connection refused" node="kubernetes-upgrade-375555"
	Jul 29 13:25:15 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:15.720240    4163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca72430428739b20b9bb87ba0194b234-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-375555\" (UID: \"ca72430428739b20b9bb87ba0194b234\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-375555"
	Jul 29 13:25:15 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:15.720457    4163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca72430428739b20b9bb87ba0194b234-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-375555\" (UID: \"ca72430428739b20b9bb87ba0194b234\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-375555"
	Jul 29 13:25:15 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:15.720728    4163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58dd47247145b2e593a944aa8f29ad5b-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-375555\" (UID: \"58dd47247145b2e593a944aa8f29ad5b\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-375555"
	Jul 29 13:25:15 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:15.914834    4163 scope.go:117] "RemoveContainer" containerID="1a7b1fc0cdaf90cc08bd80fc4612f3c63af52e86fd2afdf88238012e25b92bb9"
	Jul 29 13:25:15 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:15.917716    4163 scope.go:117] "RemoveContainer" containerID="2c356e65e7361564cf9d9fd523277c45a9528ee47cf03b31567437176399f68e"
	Jul 29 13:25:16 kubernetes-upgrade-375555 kubelet[4163]: E0729 13:25:16.018216    4163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-375555?timeout=10s\": dial tcp 192.168.50.118:8443: connect: connection refused" interval="800ms"
	Jul 29 13:25:16 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:16.121695    4163 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-375555"
	Jul 29 13:25:16 kubernetes-upgrade-375555 kubelet[4163]: E0729 13:25:16.122647    4163 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.118:8443: connect: connection refused" node="kubernetes-upgrade-375555"
	Jul 29 13:25:16 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:16.924610    4163 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-375555"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.383848    4163 apiserver.go:52] "Watching apiserver"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.508759    4163 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.561352    4163 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-375555"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.561457    4163 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-375555"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.561496    4163 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.562948    4163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.591921    4163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edc46954-3475-4e5a-9778-51d9324372ae-xtables-lock\") pod \"kube-proxy-xlfdr\" (UID: \"edc46954-3475-4e5a-9778-51d9324372ae\") " pod="kube-system/kube-proxy-xlfdr"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.592103    4163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edc46954-3475-4e5a-9778-51d9324372ae-lib-modules\") pod \"kube-proxy-xlfdr\" (UID: \"edc46954-3475-4e5a-9778-51d9324372ae\") " pod="kube-system/kube-proxy-xlfdr"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.592189    4163 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7218505f-dd3e-4af6-8cfa-3cb6396ae18b-tmp\") pod \"storage-provisioner\" (UID: \"7218505f-dd3e-4af6-8cfa-3cb6396ae18b\") " pod="kube-system/storage-provisioner"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: E0729 13:25:18.669519    4163 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-375555\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-375555"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.723875    4163 scope.go:117] "RemoveContainer" containerID="c0f25bdf2f04fbf05522d596270e80b51812623ca183a4a16bb809e168e383e3"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.725320    4163 scope.go:117] "RemoveContainer" containerID="abdd2a5a06c3f3c3ced9e84743822a33b63573c85cbfd8150aab7bc9d8b91279"
	Jul 29 13:25:18 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:18.730635    4163 scope.go:117] "RemoveContainer" containerID="f2697fbe4723922fcf8873c80920925111bd9e67d4d1f68bc8afa618ebf4b8d7"
	Jul 29 13:25:20 kubernetes-upgrade-375555 kubelet[4163]: I0729 13:25:20.642380    4163 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [17c6228cd6aa66901e54dc33b48a4ca8ae8dc5a3bc62f909033d101f51ba2168] <==
	I0729 13:25:18.981705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:25:19.006780       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:25:19.006881       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [f2697fbe4723922fcf8873c80920925111bd9e67d4d1f68bc8afa618ebf4b8d7] <==
	I0729 13:25:12.383626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 13:25:12.385394       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-375555 -n kubernetes-upgrade-375555
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-375555 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-375555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-375555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-375555: (1.184752127s)
--- FAIL: TestKubernetesUpgrade (445.92s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (74.98s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-220574 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-220574 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.262914602s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-220574] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-220574" primary control-plane node in "pause-220574" cluster
	* Updating the running kvm2 "pause-220574" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-220574" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:22:00.828274  284129 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:22:00.828388  284129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:22:00.828397  284129 out.go:304] Setting ErrFile to fd 2...
	I0729 13:22:00.828401  284129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:22:00.828573  284129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:22:00.829160  284129 out.go:298] Setting JSON to false
	I0729 13:22:00.830226  284129 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11064,"bootTime":1722248257,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:22:00.830304  284129 start.go:139] virtualization: kvm guest
	I0729 13:22:00.959720  284129 out.go:177] * [pause-220574] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:22:01.071472  284129 notify.go:220] Checking for updates...
	I0729 13:22:01.105795  284129 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:22:01.236001  284129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:22:01.373202  284129 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:22:01.507116  284129 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:22:01.635451  284129 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:22:01.766603  284129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:22:01.827915  284129 config.go:182] Loaded profile config "pause-220574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:22:01.828505  284129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:22:01.828561  284129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:22:01.845799  284129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0729 13:22:01.846353  284129 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:22:01.847180  284129 main.go:141] libmachine: Using API Version  1
	I0729 13:22:01.847212  284129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:22:01.847755  284129 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:22:01.848042  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:01.848357  284129 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:22:01.848693  284129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:22:01.848734  284129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:22:01.877652  284129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
	I0729 13:22:01.878266  284129 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:22:01.879034  284129 main.go:141] libmachine: Using API Version  1
	I0729 13:22:01.879073  284129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:22:01.879448  284129 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:22:01.879689  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:01.951953  284129 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:22:01.994759  284129 start.go:297] selected driver: kvm2
	I0729 13:22:01.994790  284129 start.go:901] validating driver "kvm2" against &{Name:pause-220574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-220574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:22:01.995038  284129 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:22:01.995533  284129 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:22:01.995642  284129 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:22:02.012451  284129 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:22:02.013474  284129 cni.go:84] Creating CNI manager for ""
	I0729 13:22:02.013492  284129 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:22:02.013561  284129 start.go:340] cluster config:
	{Name:pause-220574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-220574 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:22:02.013749  284129 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:22:02.036169  284129 out.go:177] * Starting "pause-220574" primary control-plane node in "pause-220574" cluster
	I0729 13:22:02.056250  284129 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:22:02.056318  284129 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:22:02.056329  284129 cache.go:56] Caching tarball of preloaded images
	I0729 13:22:02.056460  284129 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:22:02.056474  284129 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:22:02.056622  284129 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/config.json ...
	I0729 13:22:02.115138  284129 start.go:360] acquireMachinesLock for pause-220574: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:22:18.909830  284129 start.go:364] duration metric: took 16.794646814s to acquireMachinesLock for "pause-220574"
	I0729 13:22:18.909892  284129 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:22:18.909904  284129 fix.go:54] fixHost starting: 
	I0729 13:22:18.910315  284129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:22:18.910368  284129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:22:18.930495  284129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
	I0729 13:22:18.930952  284129 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:22:18.931414  284129 main.go:141] libmachine: Using API Version  1
	I0729 13:22:18.931445  284129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:22:18.931794  284129 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:22:18.931983  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:18.932110  284129 main.go:141] libmachine: (pause-220574) Calling .GetState
	I0729 13:22:18.933809  284129 fix.go:112] recreateIfNeeded on pause-220574: state=Running err=<nil>
	W0729 13:22:18.933841  284129 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:22:18.935844  284129 out.go:177] * Updating the running kvm2 "pause-220574" VM ...
	I0729 13:22:18.937122  284129 machine.go:94] provisionDockerMachine start ...
	I0729 13:22:18.937143  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:18.937366  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:18.939999  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:18.940448  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:18.940469  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:18.940686  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:18.940891  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:18.941035  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:18.941225  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:18.941402  284129 main.go:141] libmachine: Using SSH client type: native
	I0729 13:22:18.941601  284129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0729 13:22:18.941612  284129 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:22:19.058426  284129 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-220574
	
	I0729 13:22:19.058470  284129 main.go:141] libmachine: (pause-220574) Calling .GetMachineName
	I0729 13:22:19.058779  284129 buildroot.go:166] provisioning hostname "pause-220574"
	I0729 13:22:19.058814  284129 main.go:141] libmachine: (pause-220574) Calling .GetMachineName
	I0729 13:22:19.059040  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:19.062150  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.062541  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:19.062584  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.062759  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:19.062973  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:19.063173  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:19.063357  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:19.063541  284129 main.go:141] libmachine: Using SSH client type: native
	I0729 13:22:19.063706  284129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0729 13:22:19.063718  284129 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-220574 && echo "pause-220574" | sudo tee /etc/hostname
	I0729 13:22:19.196162  284129 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-220574
	
	I0729 13:22:19.196195  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:19.199140  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.199577  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:19.199610  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.199857  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:19.200096  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:19.200285  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:19.200447  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:19.200642  284129 main.go:141] libmachine: Using SSH client type: native
	I0729 13:22:19.200906  284129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0729 13:22:19.200930  284129 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-220574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-220574/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-220574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:22:19.318312  284129 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:22:19.318345  284129 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:22:19.318370  284129 buildroot.go:174] setting up certificates
	I0729 13:22:19.318382  284129 provision.go:84] configureAuth start
	I0729 13:22:19.318394  284129 main.go:141] libmachine: (pause-220574) Calling .GetMachineName
	I0729 13:22:19.318711  284129 main.go:141] libmachine: (pause-220574) Calling .GetIP
	I0729 13:22:19.322039  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.322505  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:19.322538  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.322701  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:19.325149  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.325546  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:19.325575  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.325812  284129 provision.go:143] copyHostCerts
	I0729 13:22:19.325896  284129 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:22:19.325910  284129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:22:19.325983  284129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:22:19.326129  284129 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:22:19.326144  284129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:22:19.326178  284129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:22:19.326275  284129 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:22:19.326287  284129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:22:19.326345  284129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:22:19.326437  284129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.pause-220574 san=[127.0.0.1 192.168.39.207 localhost minikube pause-220574]
	I0729 13:22:19.560291  284129 provision.go:177] copyRemoteCerts
	I0729 13:22:19.560368  284129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:22:19.560393  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:19.563399  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.563773  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:19.563804  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.564000  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:19.564251  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:19.564453  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:19.564580  284129 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/pause-220574/id_rsa Username:docker}
	I0729 13:22:19.654395  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:22:19.686448  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:22:19.714352  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 13:22:19.741659  284129 provision.go:87] duration metric: took 423.261309ms to configureAuth
	I0729 13:22:19.741690  284129 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:22:19.741908  284129 config.go:182] Loaded profile config "pause-220574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:22:19.742021  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:19.744427  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.744831  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:19.744861  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:19.744991  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:19.745268  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:19.745430  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:19.745611  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:19.745759  284129 main.go:141] libmachine: Using SSH client type: native
	I0729 13:22:19.745924  284129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0729 13:22:19.745938  284129 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:22:25.299875  284129 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:22:25.299909  284129 machine.go:97] duration metric: took 6.362770708s to provisionDockerMachine
	I0729 13:22:25.299926  284129 start.go:293] postStartSetup for "pause-220574" (driver="kvm2")
	I0729 13:22:25.299945  284129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:22:25.299978  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:25.300359  284129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:22:25.300391  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:25.303457  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.303835  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:25.303863  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.304042  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:25.304265  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:25.304450  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:25.304620  284129 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/pause-220574/id_rsa Username:docker}
	I0729 13:22:25.390189  284129 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:22:25.395234  284129 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:22:25.395265  284129 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:22:25.395345  284129 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:22:25.395439  284129 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:22:25.395551  284129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:22:25.406740  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:22:25.435086  284129 start.go:296] duration metric: took 135.146929ms for postStartSetup
	I0729 13:22:25.435125  284129 fix.go:56] duration metric: took 6.525221278s for fixHost
	I0729 13:22:25.435146  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:25.437997  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.438366  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:25.438409  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.438537  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:25.438796  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:25.438984  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:25.439171  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:25.439394  284129 main.go:141] libmachine: Using SSH client type: native
	I0729 13:22:25.439578  284129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0729 13:22:25.439591  284129 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:22:25.549937  284129 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259345.541043191
	
	I0729 13:22:25.549963  284129 fix.go:216] guest clock: 1722259345.541043191
	I0729 13:22:25.549972  284129 fix.go:229] Guest: 2024-07-29 13:22:25.541043191 +0000 UTC Remote: 2024-07-29 13:22:25.435128743 +0000 UTC m=+24.652061049 (delta=105.914448ms)
	I0729 13:22:25.549997  284129 fix.go:200] guest clock delta is within tolerance: 105.914448ms
	I0729 13:22:25.550004  284129 start.go:83] releasing machines lock for "pause-220574", held for 6.640134043s
	I0729 13:22:25.550033  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:25.550295  284129 main.go:141] libmachine: (pause-220574) Calling .GetIP
	I0729 13:22:25.553251  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.553645  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:25.553682  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.553883  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:25.554508  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:25.554734  284129 main.go:141] libmachine: (pause-220574) Calling .DriverName
	I0729 13:22:25.554833  284129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:22:25.554887  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:25.554960  284129 ssh_runner.go:195] Run: cat /version.json
	I0729 13:22:25.554988  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHHostname
	I0729 13:22:25.557655  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.558003  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.558059  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:25.558079  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.558262  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:25.558471  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:25.558494  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:25.558539  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:25.558652  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:25.558733  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHPort
	I0729 13:22:25.558914  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHKeyPath
	I0729 13:22:25.558934  284129 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/pause-220574/id_rsa Username:docker}
	I0729 13:22:25.559048  284129 main.go:141] libmachine: (pause-220574) Calling .GetSSHUsername
	I0729 13:22:25.559158  284129 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/pause-220574/id_rsa Username:docker}
	I0729 13:22:25.638221  284129 ssh_runner.go:195] Run: systemctl --version
	I0729 13:22:25.666152  284129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:22:25.835486  284129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:22:25.841924  284129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:22:25.841995  284129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:22:25.852180  284129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 13:22:25.852210  284129 start.go:495] detecting cgroup driver to use...
	I0729 13:22:25.852284  284129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:22:25.873288  284129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:22:25.889338  284129 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:22:25.889408  284129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:22:25.905463  284129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:22:25.923690  284129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:22:26.103302  284129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:22:26.238300  284129 docker.go:233] disabling docker service ...
	I0729 13:22:26.238397  284129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:22:26.258421  284129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:22:26.274473  284129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:22:26.409876  284129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:22:26.567414  284129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:22:26.582988  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:22:26.608352  284129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:22:26.608426  284129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:22:26.623716  284129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:22:26.623804  284129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:22:26.635469  284129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:22:26.646990  284129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:22:26.658176  284129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:22:26.670862  284129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:22:26.682637  284129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:22:26.694306  284129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:22:26.706739  284129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:22:26.718024  284129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:22:26.728391  284129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:22:26.865317  284129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:22:31.741398  284129 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.876039903s)
	I0729 13:22:31.741438  284129 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:22:31.741497  284129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:22:31.748143  284129 start.go:563] Will wait 60s for crictl version
	I0729 13:22:31.748214  284129 ssh_runner.go:195] Run: which crictl
	I0729 13:22:31.753559  284129 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:22:31.793669  284129 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:22:31.793765  284129 ssh_runner.go:195] Run: crio --version
	I0729 13:22:31.823741  284129 ssh_runner.go:195] Run: crio --version
	I0729 13:22:31.857823  284129 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:22:31.859183  284129 main.go:141] libmachine: (pause-220574) Calling .GetIP
	I0729 13:22:31.862009  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:31.862484  284129 main.go:141] libmachine: (pause-220574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:d6:ee", ip: ""} in network mk-pause-220574: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:14 +0000 UTC Type:0 Mac:52:54:00:63:d6:ee Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:pause-220574 Clientid:01:52:54:00:63:d6:ee}
	I0729 13:22:31.862513  284129 main.go:141] libmachine: (pause-220574) DBG | domain pause-220574 has defined IP address 192.168.39.207 and MAC address 52:54:00:63:d6:ee in network mk-pause-220574
	I0729 13:22:31.862746  284129 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:22:31.867293  284129 kubeadm.go:883] updating cluster {Name:pause-220574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-220574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:22:31.867437  284129 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:22:31.867506  284129 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:22:31.918174  284129 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:22:31.918199  284129 crio.go:433] Images already preloaded, skipping extraction
	I0729 13:22:31.918264  284129 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:22:31.958964  284129 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:22:31.958992  284129 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:22:31.959002  284129 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.30.3 crio true true} ...
	I0729 13:22:31.959136  284129 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-220574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-220574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:22:31.959221  284129 ssh_runner.go:195] Run: crio config
	I0729 13:22:32.010607  284129 cni.go:84] Creating CNI manager for ""
	I0729 13:22:32.010634  284129 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:22:32.010649  284129 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:22:32.010681  284129 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-220574 NodeName:pause-220574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:22:32.010877  284129 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-220574"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:22:32.010962  284129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:22:32.021439  284129 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:22:32.021508  284129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:22:32.033621  284129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 13:22:32.050248  284129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:22:32.068660  284129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0729 13:22:32.089081  284129 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0729 13:22:32.093247  284129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:22:32.230730  284129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:22:32.249419  284129 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574 for IP: 192.168.39.207
	I0729 13:22:32.249455  284129 certs.go:194] generating shared ca certs ...
	I0729 13:22:32.249478  284129 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:22:32.249649  284129 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:22:32.249697  284129 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:22:32.249709  284129 certs.go:256] generating profile certs ...
	I0729 13:22:32.249808  284129 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/client.key
	I0729 13:22:32.249870  284129 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/apiserver.key.706ec0c6
	I0729 13:22:32.249917  284129 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/proxy-client.key
	I0729 13:22:32.250061  284129 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:22:32.250096  284129 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:22:32.250107  284129 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:22:32.250137  284129 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:22:32.250166  284129 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:22:32.250197  284129 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:22:32.250251  284129 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:22:32.251137  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:22:32.276855  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:22:32.305192  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:22:32.333055  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:22:32.361841  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 13:22:32.389260  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:22:32.414101  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:22:32.437670  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/pause-220574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:22:32.461203  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:22:32.485294  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:22:32.512252  284129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:22:32.543715  284129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:22:32.562301  284129 ssh_runner.go:195] Run: openssl version
	I0729 13:22:32.567847  284129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:22:32.578632  284129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:22:32.583535  284129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:22:32.583613  284129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:22:32.589212  284129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:22:32.599178  284129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:22:32.618960  284129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:22:32.645138  284129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:22:32.645207  284129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:22:32.676631  284129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:22:32.692323  284129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:22:32.731081  284129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:22:32.765016  284129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:22:32.765106  284129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:22:32.782713  284129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:22:32.852801  284129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:22:32.893002  284129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:22:32.937158  284129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:22:32.998861  284129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:22:33.025627  284129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:22:33.061052  284129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:22:33.089092  284129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:22:33.138000  284129 kubeadm.go:392] StartCluster: {Name:pause-220574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-220574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:22:33.138231  284129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:22:33.138323  284129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:22:33.284120  284129 cri.go:89] found id: "8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c"
	I0729 13:22:33.284148  284129 cri.go:89] found id: "25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec"
	I0729 13:22:33.284153  284129 cri.go:89] found id: "3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967"
	I0729 13:22:33.284157  284129 cri.go:89] found id: "bfc43f044d973833a476be05e5920229bb41232d6504fb1e74c079dc6b327409"
	I0729 13:22:33.284161  284129 cri.go:89] found id: "10bddfb733011425dbb2b5f91262bcea17598f0f7b3ee05ecf38981f7f1a1923"
	I0729 13:22:33.284165  284129 cri.go:89] found id: "e6882649b59f199b1721caf1dad3a96bd80350c124126315398c7ef0d630503f"
	I0729 13:22:33.284169  284129 cri.go:89] found id: "f9beea7ed45281df832306978552e70663db7e1a09eda35c301c3845b800095d"
	I0729 13:22:33.284173  284129 cri.go:89] found id: ""
	I0729 13:22:33.284233  284129 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
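Editor's note: the tail of the trace above shows the second start re-provisioning the already-running VM (hostname, certs, CRI-O pause image and cgroup driver, then `sudo systemctl restart crio`, which alone took ~4.9s) before listing kube-system containers with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`. The sketch below is not part of the captured output; it is a minimal Go example of reproducing that last check by hand, assuming you have already SSH'd into the guest and that sudo and crictl are available there.

// repro_crictl_list.go - illustrative only; mirrors the crictl query seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the log shows: IDs of all kube-system containers known to CRI-O,
	// regardless of state (running or paused).
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, out)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}

Run on the guest, this should list the same container IDs the log enumerates (8eb59faf..., 25d7b075..., etc.); an empty result after the CRI-O restart would point to the reconfiguration, not the test harness, as the source of the failure.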
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-220574 -n pause-220574
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-220574 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-220574 logs -n 25: (1.678718195s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-614412          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:19 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p offline-crio-201075             | offline-crio-201075       | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:18 UTC |
	| start   | -p kubernetes-upgrade-375555       | kubernetes-upgrade-375555 | jenkins | v1.33.1 | 29 Jul 24 13:18 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-265470        | force-systemd-env-265470  | jenkins | v1.33.1 | 29 Jul 24 13:18 UTC | 29 Jul 24 13:18 UTC |
	| start   | -p stopped-upgrade-938122          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 13:18 UTC | 29 Jul 24 13:19 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:18 UTC | 29 Jul 24 13:19 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-614412          | running-upgrade-614412    | jenkins | v1.33.1 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:20 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:19 UTC |
	| start   | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:20 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-938122 stop        | minikube                  | jenkins | v1.26.0 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p stopped-upgrade-938122          | stopped-upgrade-938122    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-225538 sudo        | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:21 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-614412          | running-upgrade-614412    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p pause-220574 --memory=2048      | pause-220574              | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:22 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-938122          | stopped-upgrade-938122    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p cert-expiration-168661          | cert-expiration-168661    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:22 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-225538 sudo        | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC | 29 Jul 24 13:21 UTC |
	| start   | -p force-systemd-flag-454180       | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC | 29 Jul 24 13:22 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-220574                    | pause-220574              | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:23 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-454180 ssh cat  | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:22 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-454180       | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:22 UTC |
	| start   | -p cert-options-606292             | cert-options-606292       | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:22:39
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:22:39.798775  284567 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:22:39.798872  284567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:22:39.798875  284567 out.go:304] Setting ErrFile to fd 2...
	I0729 13:22:39.798879  284567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:22:39.799038  284567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:22:39.799609  284567 out.go:298] Setting JSON to false
	I0729 13:22:39.800542  284567 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11103,"bootTime":1722248257,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:22:39.800598  284567 start.go:139] virtualization: kvm guest
	I0729 13:22:39.802660  284567 out.go:177] * [cert-options-606292] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:22:39.804039  284567 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:22:39.804089  284567 notify.go:220] Checking for updates...
	I0729 13:22:39.806648  284567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:22:39.807937  284567 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:22:39.809186  284567 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:22:39.810370  284567 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:22:39.811580  284567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:22:39.813260  284567 config.go:182] Loaded profile config "cert-expiration-168661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:22:39.813394  284567 config.go:182] Loaded profile config "kubernetes-upgrade-375555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:22:39.813569  284567 config.go:182] Loaded profile config "pause-220574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:22:39.813664  284567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:22:39.849876  284567 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:22:39.851086  284567 start.go:297] selected driver: kvm2
	I0729 13:22:39.851094  284567 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:22:39.851104  284567 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:22:39.851807  284567 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:22:39.851880  284567 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:22:39.867134  284567 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:22:39.867192  284567 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:22:39.867414  284567 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 13:22:39.867467  284567 cni.go:84] Creating CNI manager for ""
	I0729 13:22:39.867475  284567 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:22:39.867480  284567 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:22:39.867544  284567 start.go:340] cluster config:
	{Name:cert-options-606292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-606292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:22:39.867633  284567 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:22:39.869257  284567 out.go:177] * Starting "cert-options-606292" primary control-plane node in "cert-options-606292" cluster
	I0729 13:22:39.870270  284567 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:22:39.870296  284567 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:22:39.870313  284567 cache.go:56] Caching tarball of preloaded images
	I0729 13:22:39.870381  284567 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:22:39.870387  284567 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:22:39.870479  284567 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/config.json ...
	I0729 13:22:39.870492  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/config.json: {Name:mk0539b7baf5e26571cc6c10e2bd5422f0854491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:22:39.870603  284567 start.go:360] acquireMachinesLock for cert-options-606292: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:22:39.870625  284567 start.go:364] duration metric: took 14.405µs to acquireMachinesLock for "cert-options-606292"
	I0729 13:22:39.870637  284567 start.go:93] Provisioning new machine with config: &{Name:cert-options-606292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-606292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:22:39.870689  284567 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:22:39.872091  284567 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 13:22:39.872233  284567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:22:39.872265  284567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:22:39.886546  284567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0729 13:22:39.887011  284567 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:22:39.887534  284567 main.go:141] libmachine: Using API Version  1
	I0729 13:22:39.887549  284567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:22:39.887862  284567 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:22:39.888061  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:22:39.888202  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:22:39.888363  284567 start.go:159] libmachine.API.Create for "cert-options-606292" (driver="kvm2")
	I0729 13:22:39.888386  284567 client.go:168] LocalClient.Create starting
	I0729 13:22:39.888412  284567 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem
	I0729 13:22:39.888439  284567 main.go:141] libmachine: Decoding PEM data...
	I0729 13:22:39.888461  284567 main.go:141] libmachine: Parsing certificate...
	I0729 13:22:39.888513  284567 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem
	I0729 13:22:39.888526  284567 main.go:141] libmachine: Decoding PEM data...
	I0729 13:22:39.888533  284567 main.go:141] libmachine: Parsing certificate...
	I0729 13:22:39.888548  284567 main.go:141] libmachine: Running pre-create checks...
	I0729 13:22:39.888553  284567 main.go:141] libmachine: (cert-options-606292) Calling .PreCreateCheck
	I0729 13:22:39.888949  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetConfigRaw
	I0729 13:22:39.889322  284567 main.go:141] libmachine: Creating machine...
	I0729 13:22:39.889329  284567 main.go:141] libmachine: (cert-options-606292) Calling .Create
	I0729 13:22:39.889455  284567 main.go:141] libmachine: (cert-options-606292) Creating KVM machine...
	I0729 13:22:39.890709  284567 main.go:141] libmachine: (cert-options-606292) DBG | found existing default KVM network
	I0729 13:22:39.891970  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.891820  284590 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:5c:66} reservation:<nil>}
	I0729 13:22:39.892756  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.892696  284590 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:73:7e} reservation:<nil>}
	I0729 13:22:39.893772  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.893716  284590 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:52:3b} reservation:<nil>}
	I0729 13:22:39.896060  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.895946  284590 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 13:22:39.897287  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.897191  284590 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000404d90}
	I0729 13:22:39.897342  284567 main.go:141] libmachine: (cert-options-606292) DBG | created network xml: 
	I0729 13:22:39.897351  284567 main.go:141] libmachine: (cert-options-606292) DBG | <network>
	I0729 13:22:39.897357  284567 main.go:141] libmachine: (cert-options-606292) DBG |   <name>mk-cert-options-606292</name>
	I0729 13:22:39.897360  284567 main.go:141] libmachine: (cert-options-606292) DBG |   <dns enable='no'/>
	I0729 13:22:39.897365  284567 main.go:141] libmachine: (cert-options-606292) DBG |   
	I0729 13:22:39.897374  284567 main.go:141] libmachine: (cert-options-606292) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0729 13:22:39.897384  284567 main.go:141] libmachine: (cert-options-606292) DBG |     <dhcp>
	I0729 13:22:39.897388  284567 main.go:141] libmachine: (cert-options-606292) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0729 13:22:39.897393  284567 main.go:141] libmachine: (cert-options-606292) DBG |     </dhcp>
	I0729 13:22:39.897396  284567 main.go:141] libmachine: (cert-options-606292) DBG |   </ip>
	I0729 13:22:39.897400  284567 main.go:141] libmachine: (cert-options-606292) DBG |   
	I0729 13:22:39.897403  284567 main.go:141] libmachine: (cert-options-606292) DBG | </network>
	I0729 13:22:39.897408  284567 main.go:141] libmachine: (cert-options-606292) DBG | 
	I0729 13:22:39.902528  284567 main.go:141] libmachine: (cert-options-606292) DBG | trying to create private KVM network mk-cert-options-606292 192.168.83.0/24...
	I0729 13:22:39.969864  284567 main.go:141] libmachine: (cert-options-606292) DBG | private KVM network mk-cert-options-606292 192.168.83.0/24 created
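For reference, the private network that the DBG output above prints line-by-line can be reproduced by hand. The Go sketch below writes that same XML to a temp file and registers it with virsh; minikube itself goes through the libvirt API rather than shelling out, so this is only an illustrative equivalent under that assumption.

// Illustrative sketch: define and start the mk-cert-options-606292 network
// using the XML shown in the log above. Not minikube's actual code path.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-cert-options-606292</name>
  <dns enable='no'/>
  <ip address='192.168.83.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.83.2' end='192.168.83.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the network definition to a temp file so virsh can read it.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// virsh net-define registers the network; net-start brings it up.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-cert-options-606292"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}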
	I0729 13:22:39.969885  284567 main.go:141] libmachine: (cert-options-606292) Setting up store path in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292 ...
	I0729 13:22:39.969894  284567 main.go:141] libmachine: (cert-options-606292) Building disk image from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:22:39.969901  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.969830  284590 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:22:39.970025  284567 main.go:141] libmachine: (cert-options-606292) Downloading /home/jenkins/minikube-integration/19341-233093/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:22:40.213038  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:40.212909  284590 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa...
	I0729 13:22:40.341964  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:40.341833  284590 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/cert-options-606292.rawdisk...
	I0729 13:22:40.341986  284567 main.go:141] libmachine: (cert-options-606292) DBG | Writing magic tar header
	I0729 13:22:40.341998  284567 main.go:141] libmachine: (cert-options-606292) DBG | Writing SSH key tar header
	I0729 13:22:40.342113  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:40.341997  284590 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292 ...
	I0729 13:22:40.342148  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292
	I0729 13:22:40.342166  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines
	I0729 13:22:40.342181  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:22:40.342193  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292 (perms=drwx------)
	I0729 13:22:40.342205  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:22:40.342211  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube (perms=drwxr-xr-x)
	I0729 13:22:40.342217  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093
	I0729 13:22:40.342235  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:22:40.342240  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:22:40.342248  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home
	I0729 13:22:40.342255  284567 main.go:141] libmachine: (cert-options-606292) DBG | Skipping /home - not owner
	I0729 13:22:40.342264  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093 (perms=drwxrwxr-x)
	I0729 13:22:40.342277  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:22:40.342284  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:22:40.342315  284567 main.go:141] libmachine: (cert-options-606292) Creating domain...
	I0729 13:22:40.343381  284567 main.go:141] libmachine: (cert-options-606292) define libvirt domain using xml: 
	I0729 13:22:40.343389  284567 main.go:141] libmachine: (cert-options-606292) <domain type='kvm'>
	I0729 13:22:40.343394  284567 main.go:141] libmachine: (cert-options-606292)   <name>cert-options-606292</name>
	I0729 13:22:40.343398  284567 main.go:141] libmachine: (cert-options-606292)   <memory unit='MiB'>2048</memory>
	I0729 13:22:40.343402  284567 main.go:141] libmachine: (cert-options-606292)   <vcpu>2</vcpu>
	I0729 13:22:40.343413  284567 main.go:141] libmachine: (cert-options-606292)   <features>
	I0729 13:22:40.343418  284567 main.go:141] libmachine: (cert-options-606292)     <acpi/>
	I0729 13:22:40.343421  284567 main.go:141] libmachine: (cert-options-606292)     <apic/>
	I0729 13:22:40.343425  284567 main.go:141] libmachine: (cert-options-606292)     <pae/>
	I0729 13:22:40.343428  284567 main.go:141] libmachine: (cert-options-606292)     
	I0729 13:22:40.343433  284567 main.go:141] libmachine: (cert-options-606292)   </features>
	I0729 13:22:40.343436  284567 main.go:141] libmachine: (cert-options-606292)   <cpu mode='host-passthrough'>
	I0729 13:22:40.343440  284567 main.go:141] libmachine: (cert-options-606292)   
	I0729 13:22:40.343443  284567 main.go:141] libmachine: (cert-options-606292)   </cpu>
	I0729 13:22:40.343447  284567 main.go:141] libmachine: (cert-options-606292)   <os>
	I0729 13:22:40.343452  284567 main.go:141] libmachine: (cert-options-606292)     <type>hvm</type>
	I0729 13:22:40.343465  284567 main.go:141] libmachine: (cert-options-606292)     <boot dev='cdrom'/>
	I0729 13:22:40.343470  284567 main.go:141] libmachine: (cert-options-606292)     <boot dev='hd'/>
	I0729 13:22:40.343478  284567 main.go:141] libmachine: (cert-options-606292)     <bootmenu enable='no'/>
	I0729 13:22:40.343487  284567 main.go:141] libmachine: (cert-options-606292)   </os>
	I0729 13:22:40.343493  284567 main.go:141] libmachine: (cert-options-606292)   <devices>
	I0729 13:22:40.343498  284567 main.go:141] libmachine: (cert-options-606292)     <disk type='file' device='cdrom'>
	I0729 13:22:40.343508  284567 main.go:141] libmachine: (cert-options-606292)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/boot2docker.iso'/>
	I0729 13:22:40.343513  284567 main.go:141] libmachine: (cert-options-606292)       <target dev='hdc' bus='scsi'/>
	I0729 13:22:40.343532  284567 main.go:141] libmachine: (cert-options-606292)       <readonly/>
	I0729 13:22:40.343546  284567 main.go:141] libmachine: (cert-options-606292)     </disk>
	I0729 13:22:40.343552  284567 main.go:141] libmachine: (cert-options-606292)     <disk type='file' device='disk'>
	I0729 13:22:40.343558  284567 main.go:141] libmachine: (cert-options-606292)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:22:40.343578  284567 main.go:141] libmachine: (cert-options-606292)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/cert-options-606292.rawdisk'/>
	I0729 13:22:40.343582  284567 main.go:141] libmachine: (cert-options-606292)       <target dev='hda' bus='virtio'/>
	I0729 13:22:40.343586  284567 main.go:141] libmachine: (cert-options-606292)     </disk>
	I0729 13:22:40.343591  284567 main.go:141] libmachine: (cert-options-606292)     <interface type='network'>
	I0729 13:22:40.343596  284567 main.go:141] libmachine: (cert-options-606292)       <source network='mk-cert-options-606292'/>
	I0729 13:22:40.343602  284567 main.go:141] libmachine: (cert-options-606292)       <model type='virtio'/>
	I0729 13:22:40.343606  284567 main.go:141] libmachine: (cert-options-606292)     </interface>
	I0729 13:22:40.343609  284567 main.go:141] libmachine: (cert-options-606292)     <interface type='network'>
	I0729 13:22:40.343614  284567 main.go:141] libmachine: (cert-options-606292)       <source network='default'/>
	I0729 13:22:40.343621  284567 main.go:141] libmachine: (cert-options-606292)       <model type='virtio'/>
	I0729 13:22:40.343626  284567 main.go:141] libmachine: (cert-options-606292)     </interface>
	I0729 13:22:40.343629  284567 main.go:141] libmachine: (cert-options-606292)     <serial type='pty'>
	I0729 13:22:40.343633  284567 main.go:141] libmachine: (cert-options-606292)       <target port='0'/>
	I0729 13:22:40.343636  284567 main.go:141] libmachine: (cert-options-606292)     </serial>
	I0729 13:22:40.343644  284567 main.go:141] libmachine: (cert-options-606292)     <console type='pty'>
	I0729 13:22:40.343647  284567 main.go:141] libmachine: (cert-options-606292)       <target type='serial' port='0'/>
	I0729 13:22:40.343651  284567 main.go:141] libmachine: (cert-options-606292)     </console>
	I0729 13:22:40.343655  284567 main.go:141] libmachine: (cert-options-606292)     <rng model='virtio'>
	I0729 13:22:40.343664  284567 main.go:141] libmachine: (cert-options-606292)       <backend model='random'>/dev/random</backend>
	I0729 13:22:40.343667  284567 main.go:141] libmachine: (cert-options-606292)     </rng>
	I0729 13:22:40.343671  284567 main.go:141] libmachine: (cert-options-606292)     
	I0729 13:22:40.343674  284567 main.go:141] libmachine: (cert-options-606292)     
	I0729 13:22:40.343679  284567 main.go:141] libmachine: (cert-options-606292)   </devices>
	I0729 13:22:40.343682  284567 main.go:141] libmachine: (cert-options-606292) </domain>
	I0729 13:22:40.343705  284567 main.go:141] libmachine: (cert-options-606292) 
	I0729 13:22:40.348040  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:a9:71:c3 in network default
	I0729 13:22:40.348642  284567 main.go:141] libmachine: (cert-options-606292) Ensuring networks are active...
	I0729 13:22:40.348665  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:40.349526  284567 main.go:141] libmachine: (cert-options-606292) Ensuring network default is active
	I0729 13:22:40.349787  284567 main.go:141] libmachine: (cert-options-606292) Ensuring network mk-cert-options-606292 is active
	I0729 13:22:40.350447  284567 main.go:141] libmachine: (cert-options-606292) Getting domain xml...
	I0729 13:22:40.351198  284567 main.go:141] libmachine: (cert-options-606292) Creating domain...
	I0729 13:22:41.575568  284567 main.go:141] libmachine: (cert-options-606292) Waiting to get IP...
	I0729 13:22:41.576251  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:41.576674  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:41.576689  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:41.576643  284590 retry.go:31] will retry after 290.665118ms: waiting for machine to come up
	I0729 13:22:41.869224  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:41.869839  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:41.869895  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:41.869801  284590 retry.go:31] will retry after 296.639683ms: waiting for machine to come up
	I0729 13:22:42.168394  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:42.168857  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:42.168874  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:42.168811  284590 retry.go:31] will retry after 464.665086ms: waiting for machine to come up
	I0729 13:22:42.635134  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:42.635733  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:42.635768  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:42.635694  284590 retry.go:31] will retry after 592.412679ms: waiting for machine to come up
	I0729 13:22:43.230069  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:43.230582  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:43.230615  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:43.230511  284590 retry.go:31] will retry after 596.348698ms: waiting for machine to come up
	I0729 13:22:43.828062  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:43.828551  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:43.828574  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:43.828487  284590 retry.go:31] will retry after 614.144629ms: waiting for machine to come up
	I0729 13:22:44.444343  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:44.444926  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:44.444950  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:44.444891  284590 retry.go:31] will retry after 1.07436479s: waiting for machine to come up
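The repeated "will retry after ..." lines above are a backoff loop waiting for the new domain to obtain a DHCP lease on the private network. A rough Go sketch of that wait, polling `virsh net-dhcp-leases` for the MAC address reported in the log, might look like the following; the lease-table column parsing is an assumption about virsh's output layout.

// Sketch of the "Waiting to get IP" retry loop: poll libvirt's DHCP leases
// for the machine's MAC with a growing delay, roughly mirroring the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func leaseIP(network, mac string) (string, error) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(strings.ToLower(line), strings.ToLower(mac)) {
			// Assumed column layout: expiry date, expiry time, MAC, protocol, IP/prefix, ...
			fields := strings.Fields(line)
			if len(fields) >= 5 {
				return strings.Split(fields[4], "/")[0], nil
			}
		}
	}
	return "", fmt.Errorf("no lease for %s yet", mac)
}

func main() {
	delay := 300 * time.Millisecond
	for i := 0; i < 20; i++ {
		ip, err := leaseIP("mk-cert-options-606292", "52:54:00:7a:a6:d0")
		if err == nil {
			fmt.Println("machine IP:", ip)
			return
		}
		fmt.Printf("retry %d: %v; waiting %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay, as the increasing retry intervals in the log do
	}
}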
	I0729 13:22:44.426950  284129 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6 93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566 65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb 8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c 25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 bfc43f044d973833a476be05e5920229bb41232d6504fb1e74c079dc6b327409 10bddfb733011425dbb2b5f91262bcea17598f0f7b3ee05ecf38981f7f1a1923 e6882649b59f199b1721caf1dad3a96bd80350c124126315398c7ef0d630503f f9beea7ed45281df832306978552e70663db7e1a09eda35c301c3845b800095d: (10.749304512s)
	W0729 13:22:44.427055  284129 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6 93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566 65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb 8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c 25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 bfc43f044d973833a476be05e5920229bb41232d6504fb1e74c079dc6b327409 10bddfb733011425dbb2b5f91262bcea17598f0f7b3ee05ecf38981f7f1a1923 e6882649b59f199b1721caf1dad3a96bd80350c124126315398c7ef0d630503f f9beea7ed45281df832306978552e70663db7e1a09eda35c301c3845b800095d: Process exited with status 1
	stdout:
	8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6
	93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e
	f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566
	65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb
	8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c
	25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec
	
	stderr:
	E0729 13:22:44.415732    2892 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967\": container with ID starting with 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 not found: ID does not exist" containerID="3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967"
	time="2024-07-29T13:22:44Z" level=fatal msg="stopping the container \"3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967\": rpc error: code = NotFound desc = could not find container \"3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967\": container with ID starting with 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 not found: ID does not exist"
	I0729 13:22:44.427132  284129 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:22:44.475094  284129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:22:44.486515  284129 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 29 13:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul 29 13:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 29 13:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul 29 13:21 /etc/kubernetes/scheduler.conf
	
	I0729 13:22:44.486593  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:22:44.497586  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:22:44.508180  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:22:44.518176  284129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:22:44.518234  284129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:22:44.533035  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:22:44.542626  284129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:22:44.542676  284129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:22:44.552183  284129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:22:44.561813  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:44.630531  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:45.442990  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:45.669945  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:45.743154  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
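The five commands above re-run kubeadm's individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A simplified Go sketch of that sequence follows; it calls the kubeadm binary path from the log directly instead of the `sudo env PATH=...` wrapper, and error handling is reduced to a panic.

// Sketch: run the kubeadm init phases shown in the log, in order.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", append([]string{"/var/lib/minikube/binaries/v1.30.3/kubeadm"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("kubeadm %v:\n%s\n", p, out)
		if err != nil {
			panic(err)
		}
	}
}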
	I0729 13:22:45.520387  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:45.520954  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:45.520976  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:45.520889  284590 retry.go:31] will retry after 1.115450205s: waiting for machine to come up
	I0729 13:22:46.638169  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:46.638717  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:46.638737  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:46.638671  284590 retry.go:31] will retry after 1.484431536s: waiting for machine to come up
	I0729 13:22:48.124352  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:48.124915  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:48.124937  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:48.124868  284590 retry.go:31] will retry after 1.936812423s: waiting for machine to come up
	I0729 13:22:45.851002  284129 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:22:45.851127  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:22:46.351880  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:22:46.852195  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:22:46.866675  284129 api_server.go:72] duration metric: took 1.015671524s to wait for apiserver process to appear ...
	I0729 13:22:46.866708  284129 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:22:46.866733  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:49.382419  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:22:49.382459  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:22:49.382481  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:49.436014  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:22:49.436052  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:22:49.867224  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:49.880857  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:22:49.880903  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:22:50.367340  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:50.373322  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:22:50.373350  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:22:50.867362  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:50.871797  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I0729 13:22:50.880804  284129 api_server.go:141] control plane version: v1.30.3
	I0729 13:22:50.880839  284129 api_server.go:131] duration metric: took 4.014121842s to wait for apiserver health ...
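The healthz wait above begins with 403 responses (anonymous access is rejected, apparently because the RBAC bootstrap roles are not in place yet, which is also what the 500 responses flag under poststarthook/rbac/bootstrap-roles) and completes once /healthz returns 200. A small Go sketch of such a polling loop, using the apiserver address from the log and skipping TLS verification purely for illustration:

// Sketch: poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.39.207:8443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("healthz status:", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return
			}
		} else {
			fmt.Println("healthz error:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}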
	I0729 13:22:50.880851  284129 cni.go:84] Creating CNI manager for ""
	I0729 13:22:50.880860  284129 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:22:50.882232  284129 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:22:50.063498  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:50.063967  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:50.063990  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:50.063946  284590 retry.go:31] will retry after 2.118498254s: waiting for machine to come up
	I0729 13:22:52.183912  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:52.184387  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:52.184409  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:52.184348  284590 retry.go:31] will retry after 3.566642473s: waiting for machine to come up
	I0729 13:22:50.883485  284129 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:22:50.899181  284129 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
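The two steps above create /etc/cni/net.d and copy the generated 496-byte bridge conflist into place as 1-k8s.conflist. A minimal Go sketch of that copy is below; the real conflist content is generated by minikube and is not reproduced here, so the payload is a placeholder.

// Sketch: create the CNI config directory and write the bridge conflist.
package main

import "os"

func main() {
	const dir = "/etc/cni/net.d"
	// Placeholder: the actual bridge CNI config generated by minikube goes here.
	const conflist = `{"...": "bridge CNI config generated by minikube"}`

	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}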
	I0729 13:22:50.924359  284129 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:22:50.936737  284129 system_pods.go:59] 6 kube-system pods found
	I0729 13:22:50.936775  284129 system_pods.go:61] "coredns-7db6d8ff4d-8k5vv" [1389db61-0ea2-41a7-bc84-b8b0a234e2d6] Running
	I0729 13:22:50.936788  284129 system_pods.go:61] "etcd-pause-220574" [5eb79bac-2629-42d3-aaa9-b43b2e52400f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:22:50.936814  284129 system_pods.go:61] "kube-apiserver-pause-220574" [247fbc96-2edb-4e03-bbbd-6426889e69b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:22:50.936825  284129 system_pods.go:61] "kube-controller-manager-pause-220574" [1d9139fe-74e9-4d9d-a2e4-03324fcd2c42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:22:50.936834  284129 system_pods.go:61] "kube-proxy-9x2zj" [d102922f-5f2c-4f39-9ef4-698b8a4200b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:22:50.936844  284129 system_pods.go:61] "kube-scheduler-pause-220574" [d965087c-3020-4d98-8f81-022e403ae53b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:22:50.936865  284129 system_pods.go:74] duration metric: took 12.474896ms to wait for pod list to return data ...
	I0729 13:22:50.936874  284129 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:22:50.948409  284129 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:22:50.948503  284129 node_conditions.go:123] node cpu capacity is 2
	I0729 13:22:50.948529  284129 node_conditions.go:105] duration metric: took 11.64855ms to run NodePressure ...
	I0729 13:22:50.948551  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:51.244348  284129 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:22:51.248834  284129 kubeadm.go:739] kubelet initialised
	I0729 13:22:51.248856  284129 kubeadm.go:740] duration metric: took 4.477627ms waiting for restarted kubelet to initialise ...
	I0729 13:22:51.248867  284129 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:22:51.253248  284129 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:51.258527  284129 pod_ready.go:92] pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:22:51.258552  284129 pod_ready.go:81] duration metric: took 5.277498ms for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:51.258563  284129 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:53.268486  284129 pod_ready.go:102] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:22:55.765402  284129 pod_ready.go:102] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:22:55.752878  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:55.753314  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:55.753362  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:55.753292  284590 retry.go:31] will retry after 2.761086634s: waiting for machine to come up
	I0729 13:22:58.517911  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:58.518379  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:58.518401  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:58.518325  284590 retry.go:31] will retry after 5.085557201s: waiting for machine to come up
	I0729 13:22:57.765496  284129 pod_ready.go:102] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:22:59.765412  284129 pod_ready.go:92] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:22:59.765439  284129 pod_ready.go:81] duration metric: took 8.50686832s for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:59.765451  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:03.606100  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.606653  284567 main.go:141] libmachine: (cert-options-606292) Found IP for machine: 192.168.83.228
	I0729 13:23:03.606822  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has current primary IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.606837  284567 main.go:141] libmachine: (cert-options-606292) Reserving static IP address...
	I0729 13:23:03.607124  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find host DHCP lease matching {name: "cert-options-606292", mac: "52:54:00:7a:a6:d0", ip: "192.168.83.228"} in network mk-cert-options-606292
	I0729 13:23:03.681529  284567 main.go:141] libmachine: (cert-options-606292) DBG | Getting to WaitForSSH function...
	I0729 13:23:03.681553  284567 main.go:141] libmachine: (cert-options-606292) Reserved static IP address: 192.168.83.228
	I0729 13:23:03.681566  284567 main.go:141] libmachine: (cert-options-606292) Waiting for SSH to be available...
	I0729 13:23:03.684280  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.684659  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:03.684702  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.684909  284567 main.go:141] libmachine: (cert-options-606292) DBG | Using SSH client type: external
	I0729 13:23:03.684932  284567 main.go:141] libmachine: (cert-options-606292) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa (-rw-------)
	I0729 13:23:03.684958  284567 main.go:141] libmachine: (cert-options-606292) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:23:03.684971  284567 main.go:141] libmachine: (cert-options-606292) DBG | About to run SSH command:
	I0729 13:23:03.684982  284567 main.go:141] libmachine: (cert-options-606292) DBG | exit 0
	I0729 13:23:03.817148  284567 main.go:141] libmachine: (cert-options-606292) DBG | SSH cmd err, output: <nil>: 
	I0729 13:23:03.817397  284567 main.go:141] libmachine: (cert-options-606292) KVM machine creation complete!
	I0729 13:23:03.817751  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetConfigRaw
	I0729 13:23:03.818332  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:03.818497  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:03.818648  284567 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:23:03.818657  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetState
	I0729 13:23:03.819797  284567 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:23:03.819804  284567 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:23:03.819808  284567 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:23:03.819814  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:03.822337  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.822689  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:03.822726  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.822826  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:03.822960  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.823093  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.823250  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:03.823455  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:03.823644  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:03.823650  284567 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:23:03.936156  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:23:03.936172  284567 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:23:03.936178  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:03.939013  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.939398  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:03.939422  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.939710  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:03.939951  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.940105  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.940276  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:03.940468  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:03.940671  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:03.940679  284567 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:23:04.053840  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:23:04.053913  284567 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:23:04.053918  284567 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:23:04.053925  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:23:04.054189  284567 buildroot.go:166] provisioning hostname "cert-options-606292"
	I0729 13:23:04.054217  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:23:04.054423  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.057336  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.057730  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.057754  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.057992  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.058211  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.058380  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.058499  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.058627  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:04.058789  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:04.058795  284567 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-606292 && echo "cert-options-606292" | sudo tee /etc/hostname
	I0729 13:23:04.187606  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-606292
	
	I0729 13:23:04.187629  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.190421  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.190794  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.190812  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.190988  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.191216  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.191364  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.191529  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.191792  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:04.191978  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:04.191989  284567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-606292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-606292/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-606292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:23:04.314260  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:23:04.314281  284567 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:23:04.314323  284567 buildroot.go:174] setting up certificates
	I0729 13:23:04.314332  284567 provision.go:84] configureAuth start
	I0729 13:23:04.314341  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:23:04.314616  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetIP
	I0729 13:23:04.318136  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.318554  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.318592  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.318774  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.320778  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.321152  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.321169  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.321312  284567 provision.go:143] copyHostCerts
	I0729 13:23:04.321367  284567 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:23:04.321398  284567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:23:04.322337  284567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:23:04.322468  284567 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:23:04.322474  284567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:23:04.322503  284567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:23:04.322552  284567 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:23:04.322555  284567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:23:04.322575  284567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:23:04.322613  284567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.cert-options-606292 san=[127.0.0.1 192.168.83.228 cert-options-606292 localhost minikube]
	I0729 13:23:04.426488  284567 provision.go:177] copyRemoteCerts
	I0729 13:23:04.426541  284567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:23:04.426565  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.429429  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.429794  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.429811  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.429966  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.430187  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.430319  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.430457  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:04.519182  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:23:04.555875  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 13:23:04.581388  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:23:04.606307  284567 provision.go:87] duration metric: took 291.963109ms to configureAuth
	I0729 13:23:04.606326  284567 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:23:04.606513  284567 config.go:182] Loaded profile config "cert-options-606292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:23:04.606577  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.609302  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.609697  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.609721  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.609923  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.610123  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.610287  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.610455  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.610608  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:04.610830  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:04.610847  284567 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:23:01.773090  284129 pod_ready.go:102] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:23:04.271365  284129 pod_ready.go:102] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:23:05.772462  284129 pod_ready.go:92] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.772487  284129 pod_ready.go:81] duration metric: took 6.007029665s for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.772497  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.777416  284129 pod_ready.go:92] pod "kube-controller-manager-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.777438  284129 pod_ready.go:81] duration metric: took 4.93488ms for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.777453  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.782530  284129 pod_ready.go:92] pod "kube-proxy-9x2zj" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.782551  284129 pod_ready.go:81] duration metric: took 5.091325ms for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.782559  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.787500  284129 pod_ready.go:92] pod "kube-scheduler-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.787525  284129 pod_ready.go:81] duration metric: took 4.959545ms for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.787533  284129 pod_ready.go:38] duration metric: took 14.53865416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
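
A roughly equivalent manual check for the same set of system-critical pods (illustrative; the test harness performs this wait internally via pod_ready.go):

	kubectl --context pause-220574 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl --context pause-220574 -n kube-system get pods \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
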
	I0729 13:23:05.787555  284129 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:23:05.802804  284129 ops.go:34] apiserver oom_adj: -16
	I0729 13:23:05.802831  284129 kubeadm.go:597] duration metric: took 32.314694038s to restartPrimaryControlPlane
	I0729 13:23:05.802846  284129 kubeadm.go:394] duration metric: took 32.6648974s to StartCluster
	I0729 13:23:05.802871  284129 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:05.802962  284129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:23:05.803901  284129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:05.804188  284129 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:23:05.804436  284129 config.go:182] Loaded profile config "pause-220574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:23:05.804416  284129 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:23:05.806965  284129 out.go:177] * Verifying Kubernetes components...
	I0729 13:23:05.807002  284129 out.go:177] * Enabled addons: 
	I0729 13:23:05.808358  284129 addons.go:510] duration metric: took 3.941625ms for enable addons: enabled=[]
	I0729 13:23:05.808372  284129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:23:04.891655  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:23:04.891670  284567 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:23:04.891677  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetURL
	I0729 13:23:04.893058  284567 main.go:141] libmachine: (cert-options-606292) DBG | Using libvirt version 6000000
	I0729 13:23:04.895233  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.895593  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.895617  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.895755  284567 main.go:141] libmachine: Docker is up and running!
	I0729 13:23:04.895762  284567 main.go:141] libmachine: Reticulating splines...
	I0729 13:23:04.895773  284567 client.go:171] duration metric: took 25.007375238s to LocalClient.Create
	I0729 13:23:04.895799  284567 start.go:167] duration metric: took 25.007438657s to libmachine.API.Create "cert-options-606292"
	I0729 13:23:04.895816  284567 start.go:293] postStartSetup for "cert-options-606292" (driver="kvm2")
	I0729 13:23:04.895826  284567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:23:04.895844  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:04.896059  284567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:23:04.896078  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.898453  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.898840  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.898861  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.898981  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.899179  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.899342  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.899484  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:04.988086  284567 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:23:04.992180  284567 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:23:04.992197  284567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:23:04.992340  284567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:23:04.992430  284567 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:23:04.992540  284567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:23:05.002472  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:23:05.026227  284567 start.go:296] duration metric: took 130.399957ms for postStartSetup
	I0729 13:23:05.026266  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetConfigRaw
	I0729 13:23:05.026909  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetIP
	I0729 13:23:05.029747  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.030086  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.030106  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.030474  284567 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/config.json ...
	I0729 13:23:05.030688  284567 start.go:128] duration metric: took 25.159989393s to createHost
	I0729 13:23:05.030707  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:05.032991  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.033351  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.033372  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.033554  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:05.033763  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.033959  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.034120  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:05.034290  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:05.034501  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:05.034507  284567 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:23:05.154024  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259385.129923257
	
	I0729 13:23:05.154040  284567 fix.go:216] guest clock: 1722259385.129923257
	I0729 13:23:05.154049  284567 fix.go:229] Guest: 2024-07-29 13:23:05.129923257 +0000 UTC Remote: 2024-07-29 13:23:05.030695472 +0000 UTC m=+25.267780610 (delta=99.227785ms)
	I0729 13:23:05.154086  284567 fix.go:200] guest clock delta is within tolerance: 99.227785ms
	I0729 13:23:05.154090  284567 start.go:83] releasing machines lock for "cert-options-606292", held for 25.283460413s
	I0729 13:23:05.154111  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.154392  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetIP
	I0729 13:23:05.157250  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.157670  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.157697  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.157875  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.158393  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.158600  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.158705  284567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:23:05.158750  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:05.158847  284567 ssh_runner.go:195] Run: cat /version.json
	I0729 13:23:05.158866  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:05.161512  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.161806  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.161834  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.161851  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.161994  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:05.162186  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.162245  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.162264  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.162362  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:05.162433  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:05.162515  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:05.162604  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.162781  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:05.162921  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:05.263611  284567 ssh_runner.go:195] Run: systemctl --version
	I0729 13:23:05.270028  284567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:23:05.436114  284567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:23:05.442864  284567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:23:05.442918  284567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:23:05.458310  284567 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:23:05.458326  284567 start.go:495] detecting cgroup driver to use...
	I0729 13:23:05.458381  284567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:23:05.475082  284567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:23:05.488538  284567 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:23:05.488582  284567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:23:05.503454  284567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:23:05.518311  284567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:23:05.637743  284567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:23:05.808772  284567 docker.go:233] disabling docker service ...
	I0729 13:23:05.808835  284567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:23:05.827156  284567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:23:05.840232  284567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:23:05.964238  284567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:23:06.100990  284567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:23:06.115815  284567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:23:06.135787  284567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:23:06.135840  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.145774  284567 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:23:06.145839  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.156083  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.165976  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.176463  284567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:23:06.187707  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.198557  284567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.217006  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
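
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (reconstructed from the commands shown; surrounding TOML section headers and any other keys in the drop-in are omitted):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
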
	I0729 13:23:06.227480  284567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:23:06.237191  284567 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:23:06.237238  284567 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:23:06.249970  284567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
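
The bridge netfilter sysctl does not exist until the br_netfilter module is loaded, which is why the first sysctl call above fails with status 255; after the modprobe, the resulting state can be confirmed with commands such as (illustrative):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
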
	I0729 13:23:06.260110  284567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:23:06.386423  284567 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:23:06.526332  284567 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:23:06.526414  284567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:23:06.531262  284567 start.go:563] Will wait 60s for crictl version
	I0729 13:23:06.531319  284567 ssh_runner.go:195] Run: which crictl
	I0729 13:23:06.535090  284567 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:23:06.576748  284567 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:23:06.576865  284567 ssh_runner.go:195] Run: crio --version
	I0729 13:23:06.607446  284567 ssh_runner.go:195] Run: crio --version
	I0729 13:23:06.640321  284567 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:23:06.012698  284129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:23:06.033843  284129 node_ready.go:35] waiting up to 6m0s for node "pause-220574" to be "Ready" ...
	I0729 13:23:06.037885  284129 node_ready.go:49] node "pause-220574" has status "Ready":"True"
	I0729 13:23:06.037906  284129 node_ready.go:38] duration metric: took 4.01969ms for node "pause-220574" to be "Ready" ...
	I0729 13:23:06.037915  284129 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:23:06.045262  284129 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.170370  284129 pod_ready.go:92] pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:06.170395  284129 pod_ready.go:81] duration metric: took 125.110945ms for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.170405  284129 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.570907  284129 pod_ready.go:92] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:06.570941  284129 pod_ready.go:81] duration metric: took 400.52756ms for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.570958  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.970615  284129 pod_ready.go:92] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:06.970644  284129 pod_ready.go:81] duration metric: took 399.678679ms for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.970654  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.371066  284129 pod_ready.go:92] pod "kube-controller-manager-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:07.371098  284129 pod_ready.go:81] duration metric: took 400.435224ms for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.371115  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.774353  284129 pod_ready.go:92] pod "kube-proxy-9x2zj" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:07.774425  284129 pod_ready.go:81] duration metric: took 403.294082ms for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.774444  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:08.170349  284129 pod_ready.go:92] pod "kube-scheduler-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:08.170383  284129 pod_ready.go:81] duration metric: took 395.930424ms for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:08.170395  284129 pod_ready.go:38] duration metric: took 2.13246831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:23:08.170414  284129 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:23:08.170482  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:23:08.185160  284129 api_server.go:72] duration metric: took 2.380925489s to wait for apiserver process to appear ...
	I0729 13:23:08.185193  284129 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:23:08.185219  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:23:08.189672  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I0729 13:23:08.190666  284129 api_server.go:141] control plane version: v1.30.3
	I0729 13:23:08.190693  284129 api_server.go:131] duration metric: took 5.490899ms to wait for apiserver health ...
	I0729 13:23:08.190704  284129 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:23:08.372302  284129 system_pods.go:59] 6 kube-system pods found
	I0729 13:23:08.372334  284129 system_pods.go:61] "coredns-7db6d8ff4d-8k5vv" [1389db61-0ea2-41a7-bc84-b8b0a234e2d6] Running
	I0729 13:23:08.372338  284129 system_pods.go:61] "etcd-pause-220574" [5eb79bac-2629-42d3-aaa9-b43b2e52400f] Running
	I0729 13:23:08.372342  284129 system_pods.go:61] "kube-apiserver-pause-220574" [247fbc96-2edb-4e03-bbbd-6426889e69b2] Running
	I0729 13:23:08.372345  284129 system_pods.go:61] "kube-controller-manager-pause-220574" [1d9139fe-74e9-4d9d-a2e4-03324fcd2c42] Running
	I0729 13:23:08.372348  284129 system_pods.go:61] "kube-proxy-9x2zj" [d102922f-5f2c-4f39-9ef4-698b8a4200b2] Running
	I0729 13:23:08.372351  284129 system_pods.go:61] "kube-scheduler-pause-220574" [d965087c-3020-4d98-8f81-022e403ae53b] Running
	I0729 13:23:08.372357  284129 system_pods.go:74] duration metric: took 181.645596ms to wait for pod list to return data ...
	I0729 13:23:08.372365  284129 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:23:08.570435  284129 default_sa.go:45] found service account: "default"
	I0729 13:23:08.570476  284129 default_sa.go:55] duration metric: took 198.103554ms for default service account to be created ...
	I0729 13:23:08.570491  284129 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:23:08.774169  284129 system_pods.go:86] 6 kube-system pods found
	I0729 13:23:08.774213  284129 system_pods.go:89] "coredns-7db6d8ff4d-8k5vv" [1389db61-0ea2-41a7-bc84-b8b0a234e2d6] Running
	I0729 13:23:08.774222  284129 system_pods.go:89] "etcd-pause-220574" [5eb79bac-2629-42d3-aaa9-b43b2e52400f] Running
	I0729 13:23:08.774229  284129 system_pods.go:89] "kube-apiserver-pause-220574" [247fbc96-2edb-4e03-bbbd-6426889e69b2] Running
	I0729 13:23:08.774237  284129 system_pods.go:89] "kube-controller-manager-pause-220574" [1d9139fe-74e9-4d9d-a2e4-03324fcd2c42] Running
	I0729 13:23:08.774244  284129 system_pods.go:89] "kube-proxy-9x2zj" [d102922f-5f2c-4f39-9ef4-698b8a4200b2] Running
	I0729 13:23:08.774252  284129 system_pods.go:89] "kube-scheduler-pause-220574" [d965087c-3020-4d98-8f81-022e403ae53b] Running
	I0729 13:23:08.774262  284129 system_pods.go:126] duration metric: took 203.763933ms to wait for k8s-apps to be running ...
	I0729 13:23:08.774277  284129 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:23:08.774348  284129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:23:08.791974  284129 system_svc.go:56] duration metric: took 17.68917ms WaitForService to wait for kubelet
	I0729 13:23:08.792005  284129 kubeadm.go:582] duration metric: took 2.987777593s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:23:08.792039  284129 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:23:08.972440  284129 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:23:08.972475  284129 node_conditions.go:123] node cpu capacity is 2
	I0729 13:23:08.972491  284129 node_conditions.go:105] duration metric: took 180.445302ms to run NodePressure ...
	I0729 13:23:08.972507  284129 start.go:241] waiting for startup goroutines ...
	I0729 13:23:08.972516  284129 start.go:246] waiting for cluster config update ...
	I0729 13:23:08.972526  284129 start.go:255] writing updated cluster config ...
	I0729 13:23:08.972948  284129 ssh_runner.go:195] Run: rm -f paused
	I0729 13:23:09.024248  284129 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:23:09.026464  284129 out.go:177] * Done! kubectl is now configured to use "pause-220574" cluster and "default" namespace by default
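
At this point the profile is reachable directly through the host kubeconfig; for example (illustrative, not part of the test output):

	kubectl --context pause-220574 get nodes
	kubectl --context pause-220574 get pods -A
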
	
	
	==> CRI-O <==
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.824419507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f823ec9-e173-4073-b093-779852969918 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.826907356Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03a8d251-163d-4b86-a7de-23de4c5f6b1c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.827561394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259389827528260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03a8d251-163d-4b86-a7de-23de4c5f6b1c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.828533634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f4640ed-44e8-4ff6-b9f5-c56c8e323afb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.828615661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f4640ed-44e8-4ff6-b9f5-c56c8e323afb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.828906092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259353014544583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d43
87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259353070888842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259353042555496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722259353030752321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722259353005228395,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec,PodSandboxId:288281f9786257fdd11206cf711f9da9a17513993978669c2e4aaf9f76ffd2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259317455633280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f4640ed-44e8-4ff6-b9f5-c56c8e323afb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.857705067Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39fcd23c-5693-4a96-9890-56694320a370 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.858033226Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8k5vv,Uid:1389db61-0ea2-41a7-bc84-b8b0a234e2d6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722259352872603214,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:21:55.358666173Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&PodSandboxMetadata{Name:kube-proxy-9x2zj,Uid:d102922f-5f2c-4f39-9ef4-698b8a4200b2,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1722259352706912570,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:21:54.501075235Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-220574,Uid:a25ca1ae6eb5dc4f9baa8714fd089404,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722259352699108759,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,tier: control-plane,},Annotations:map[string
]string{kubernetes.io/config.hash: a25ca1ae6eb5dc4f9baa8714fd089404,kubernetes.io/config.seen: 2024-07-29T13:21:41.367962010Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-220574,Uid:9dc17ea22abaca2d068b5bdb8a70355e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722259352682091752,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.207:8443,kubernetes.io/config.hash: 9dc17ea22abaca2d068b5bdb8a70355e,kubernetes.io/config.seen: 2024-07-29T13:21:41.367956344Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de86232cbb45eb4b5e0d9c4
be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-220574,Uid:2cb59befc5cf5cf66209f443f45a9883,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722259352661767642,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2cb59befc5cf5cf66209f443f45a9883,kubernetes.io/config.seen: 2024-07-29T13:21:41.367960898Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&PodSandboxMetadata{Name:etcd-pause-220574,Uid:015b3e34b26c6c4d5abba1f6270310bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722259352634430461,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.207:2379,kubernetes.io/config.hash: 015b3e34b26c6c4d5abba1f6270310bc,kubernetes.io/config.seen: 2024-07-29T13:21:41.367951750Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=39fcd23c-5693-4a96-9890-56694320a370 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.859026715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=028de2fa-ad48-402b-a3c2-43619c524caf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.859126401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=028de2fa-ad48-402b-a3c2-43619c524caf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.859322854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=028de2fa-ad48-402b-a3c2-43619c524caf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.902202979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49d42773-17c0-4e06-818c-4a0c759661fe name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.902324470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49d42773-17c0-4e06-818c-4a0c759661fe name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.904126450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d2d2295-186d-4089-959a-8d4bb16e8d42 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.904863555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259389904833964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d2d2295-186d-4089-959a-8d4bb16e8d42 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.905837786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1afa538e-5045-43bf-8ea4-df2c2be37472 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.905932103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1afa538e-5045-43bf-8ea4-df2c2be37472 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.906364155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259353014544583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d43
87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259353070888842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259353042555496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722259353030752321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722259353005228395,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec,PodSandboxId:288281f9786257fdd11206cf711f9da9a17513993978669c2e4aaf9f76ffd2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259317455633280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1afa538e-5045-43bf-8ea4-df2c2be37472 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.974881202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7012eba4-f576-4989-836a-9e7a1b002611 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.974977177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7012eba4-f576-4989-836a-9e7a1b002611 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.976637597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4b0554a-e817-45db-a3b7-024aaf2d0456 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.977081638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259389977051321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4b0554a-e817-45db-a3b7-024aaf2d0456 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.977759030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b6f0e5e-3798-4c6f-9e06-b21c4592988f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.977811982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b6f0e5e-3798-4c6f-9e06-b21c4592988f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:09 pause-220574 crio[2246]: time="2024-07-29 13:23:09.978058552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259353014544583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d43
87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259353070888842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259353042555496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722259353030752321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722259353005228395,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec,PodSandboxId:288281f9786257fdd11206cf711f9da9a17513993978669c2e4aaf9f76ffd2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259317455633280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b6f0e5e-3798-4c6f-9e06-b21c4592988f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d75a30cce700c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago       Running             kube-proxy                2                   aa81ddf6ea29e       kube-proxy-9x2zj
	237260430fe49       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      2                   8de9f9c714e0e       etcd-pause-220574
	321760ac6f478       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago       Running             kube-apiserver            2                   4d438f6e4d9f9       kube-apiserver-pause-220574
	b98bb49824351       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago       Running             kube-controller-manager   2                   de86232cbb45e       kube-controller-manager-pause-220574
	cc0b245a66c1a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   26 seconds ago       Running             kube-scheduler            2                   2aba4bd3c413e       kube-scheduler-pause-220574
	781af45652d87       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago       Running             coredns                   1                   eede76f4fd7d3       coredns-7db6d8ff4d-8k5vv
	8a0ee555ac5bc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   37 seconds ago       Exited              kube-scheduler            1                   2aba4bd3c413e       kube-scheduler-pause-220574
	93f1e67829cd7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   37 seconds ago       Exited              etcd                      1                   8de9f9c714e0e       etcd-pause-220574
	f06a4f49a0c5b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   37 seconds ago       Exited              kube-controller-manager   1                   de86232cbb45e       kube-controller-manager-pause-220574
	65734ff39c259       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   37 seconds ago       Exited              kube-proxy                1                   aa81ddf6ea29e       kube-proxy-9x2zj
	8eb59faf1dd3a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   37 seconds ago       Exited              kube-apiserver            1                   4d438f6e4d9f9       kube-apiserver-pause-220574
	25d7b075af9a9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   288281f978625       coredns-7db6d8ff4d-8k5vv
	
	
	==> coredns [25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54273 - 53972 "HINFO IN 6537992820329738610.4150310348957589040. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008317116s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60331 - 41521 "HINFO IN 8959339138602030184.1223837170644759578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010213156s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1860483542]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:22:34.122) (total time: 10002ms):
	Trace[1860483542]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:22:44.124)
	Trace[1860483542]: [10.002247375s] [10.002247375s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[156343312]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:22:34.122) (total time: 10002ms):
	Trace[156343312]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (13:22:44.124)
	Trace[156343312]: [10.002286135s] [10.002286135s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1720021442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:22:34.122) (total time: 10002ms):
	Trace[1720021442]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (13:22:44.125)
	Trace[1720021442]: [10.002602046s] [10.002602046s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-220574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-220574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=pause-220574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_21_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:21:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-220574
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:23:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    pause-220574
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a07adda48df4f98a28a6c045026b254
	  System UUID:                3a07adda-48df-4f98-a28a-6c045026b254
	  Boot ID:                    dc371f82-ed94-451a-a8bf-7ae7b1a4e6d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8k5vv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 etcd-pause-220574                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-220574             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-220574    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-9x2zj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-220574             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node pause-220574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node pause-220574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 95s)  kubelet          Node pause-220574 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-220574 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-220574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-220574 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeReady                88s                kubelet          Node pause-220574 status is now: NodeReady
	  Normal  RegisteredNode           76s                node-controller  Node pause-220574 event: Registered Node pause-220574 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-220574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-220574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-220574 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-220574 event: Registered Node pause-220574 in Controller
	
	
	==> dmesg <==
	[  +0.059336] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059446] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.216147] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.108143] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.277610] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.365726] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +0.059774] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.759401] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.578969] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.480614] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.078740] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.304310] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.142726] systemd-fstab-generator[1545]: Ignoring "noauto" option for root device
	[Jul29 13:22] systemd-fstab-generator[2164]: Ignoring "noauto" option for root device
	[  +0.088396] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.075516] systemd-fstab-generator[2177]: Ignoring "noauto" option for root device
	[  +0.163255] systemd-fstab-generator[2191]: Ignoring "noauto" option for root device
	[  +0.152841] systemd-fstab-generator[2203]: Ignoring "noauto" option for root device
	[  +0.300507] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +5.373882] systemd-fstab-generator[2358]: Ignoring "noauto" option for root device
	[  +0.069896] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.207389] kauditd_printk_skb: 89 callbacks suppressed
	[  +4.140942] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[  +4.617516] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 13:23] systemd-fstab-generator[3520]: Ignoring "noauto" option for root device
	
	
	==> etcd [237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4] <==
	{"level":"info","ts":"2024-07-29T13:22:46.600687Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:46.600696Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:46.600885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 switched to configuration voters=(16921330813298615523)"}
	{"level":"info","ts":"2024-07-29T13:22:46.600976Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","added-peer-id":"ead4a4b8bd8924e3","added-peer-peer-urls":["https://192.168.39.207:2380"]}
	{"level":"info","ts":"2024-07-29T13:22:46.601062Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:46.601103Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:46.607483Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:22:46.607809Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-29T13:22:46.607842Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-29T13:22:46.608021Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ead4a4b8bd8924e3","initial-advertise-peer-urls":["https://192.168.39.207:2380"],"listen-peer-urls":["https://192.168.39.207:2380"],"advertise-client-urls":["https://192.168.39.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:22:46.609106Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:22:48.074596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:48.074659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:48.074699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 received MsgPreVoteResp from ead4a4b8bd8924e3 at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:48.074723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.07473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 received MsgVoteResp from ead4a4b8bd8924e3 at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.074738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.074745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ead4a4b8bd8924e3 elected leader ead4a4b8bd8924e3 at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.080854Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ead4a4b8bd8924e3","local-member-attributes":"{Name:pause-220574 ClientURLs:[https://192.168.39.207:2379]}","request-path":"/0/members/ead4a4b8bd8924e3/attributes","cluster-id":"7fc3162940ce7ea7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:22:48.080903Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:22:48.081249Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:22:48.081356Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:22:48.081438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:22:48.083334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.207:2379"}
	{"level":"info","ts":"2024-07-29T13:22:48.083517Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e] <==
	{"level":"info","ts":"2024-07-29T13:22:33.587689Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"28.844955ms"}
	{"level":"info","ts":"2024-07-29T13:22:33.614804Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T13:22:33.656154Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","commit-index":442}
	{"level":"info","ts":"2024-07-29T13:22:33.66566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T13:22:33.665874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:33.665904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ead4a4b8bd8924e3 [peers: [], term: 2, commit: 442, applied: 0, lastindex: 442, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T13:22:33.67209Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T13:22:33.674989Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":423}
	{"level":"info","ts":"2024-07-29T13:22:33.677191Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T13:22:33.684662Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"ead4a4b8bd8924e3","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:22:33.684933Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"ead4a4b8bd8924e3"}
	{"level":"info","ts":"2024-07-29T13:22:33.68498Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"ead4a4b8bd8924e3","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T13:22:33.685151Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:33.685229Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:33.685248Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:33.68546Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T13:22:33.685806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 switched to configuration voters=(16921330813298615523)"}
	{"level":"info","ts":"2024-07-29T13:22:33.685882Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","added-peer-id":"ead4a4b8bd8924e3","added-peer-peer-urls":["https://192.168.39.207:2380"]}
	{"level":"info","ts":"2024-07-29T13:22:33.685995Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:33.686041Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:33.74867Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:22:33.749193Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ead4a4b8bd8924e3","initial-advertise-peer-urls":["https://192.168.39.207:2380"],"listen-peer-urls":["https://192.168.39.207:2380"],"advertise-client-urls":["https://192.168.39.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:22:33.749301Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:22:33.749569Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-29T13:22:33.749655Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.207:2380"}
	
	
	==> kernel <==
	 13:23:10 up 2 min,  0 users,  load average: 1.16, 0.42, 0.15
	Linux pause-220574 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605] <==
	I0729 13:22:49.502305       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 13:22:49.502545       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:22:49.502568       1 policy_source.go:224] refreshing policies
	I0729 13:22:49.529134       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 13:22:49.530538       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:22:49.530612       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 13:22:49.531449       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:22:49.531675       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 13:22:49.531717       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 13:22:49.536599       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 13:22:49.539648       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 13:22:49.546170       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 13:22:49.546220       1 aggregator.go:165] initial CRD sync complete...
	I0729 13:22:49.546233       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 13:22:49.546238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 13:22:49.546243       1 cache.go:39] Caches are synced for autoregister controller
	I0729 13:22:49.568078       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:22:50.336117       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 13:22:51.084472       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 13:22:51.101205       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 13:22:51.138094       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 13:22:51.172100       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 13:22:51.179258       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 13:23:01.713914       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 13:23:01.965613       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c] <==
	I0729 13:22:33.608974       1 options.go:221] external host was not specified, using 192.168.39.207
	I0729 13:22:33.612808       1 server.go:148] Version: v1.30.3
	I0729 13:22:33.612872       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 13:22:34.298952       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:34.299669       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 13:22:34.299863       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 13:22:34.303247       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:22:34.306572       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 13:22:34.306601       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 13:22:34.306798       1 instance.go:299] Using reconciler: lease
	W0729 13:22:34.307543       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:35.299969       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:35.300121       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:35.307909       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:36.814870       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:36.875476       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:37.131266       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:38.993828       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:39.008978       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:39.526847       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:42.404300       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:43.315344       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:43.553352       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc] <==
	I0729 13:23:01.711966       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 13:23:01.714093       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 13:23:01.716406       1 shared_informer.go:320] Caches are synced for expand
	I0729 13:23:01.717671       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 13:23:01.719599       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 13:23:01.723267       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 13:23:01.725753       1 shared_informer.go:320] Caches are synced for service account
	I0729 13:23:01.729030       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 13:23:01.739632       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 13:23:01.740848       1 shared_informer.go:320] Caches are synced for namespace
	I0729 13:23:01.740936       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 13:23:01.743293       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 13:23:01.745549       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 13:23:01.749071       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 13:23:01.752440       1 shared_informer.go:320] Caches are synced for disruption
	I0729 13:23:01.754826       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 13:23:01.765766       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 13:23:01.769446       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 13:23:01.801318       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 13:23:01.914499       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 13:23:01.934498       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 13:23:01.947239       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 13:23:02.381486       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 13:23:02.381585       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 13:23:02.388768       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566] <==
	
	
	==> kube-proxy [65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb] <==
	I0729 13:22:34.132592       1 server_linux.go:69] "Using iptables proxy"
	
	
	==> kube-proxy [d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef] <==
	I0729 13:22:50.265347       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:22:50.275308       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.207"]
	I0729 13:22:50.311241       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:22:50.311290       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:22:50.311311       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:22:50.314447       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:22:50.314853       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:22:50.314893       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:22:50.316542       1 config.go:192] "Starting service config controller"
	I0729 13:22:50.316595       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:22:50.316645       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:22:50.316672       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:22:50.317443       1 config.go:319] "Starting node config controller"
	I0729 13:22:50.317478       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:22:50.417520       1 shared_informer.go:320] Caches are synced for node config
	I0729 13:22:50.417610       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:22:50.417671       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6] <==
	
	
	==> kube-scheduler [cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2] <==
	W0729 13:22:49.407332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 13:22:49.407359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 13:22:49.415553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:22:49.415600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:22:49.415669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:22:49.415703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:22:49.415766       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:22:49.415793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 13:22:49.415850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:22:49.415877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:22:49.415936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:22:49.415968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:22:49.416029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:22:49.416058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 13:22:49.416109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:22:49.416136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:22:49.416185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:22:49.416212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:22:49.416260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:22:49.416287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:22:49.416348       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:22:49.416439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:22:49.452511       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:22:49.452560       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 13:22:53.076291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028890    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cb59befc5cf5cf66209f443f45a9883-kubeconfig\") pod \"kube-controller-manager-pause-220574\" (UID: \"2cb59befc5cf5cf66209f443f45a9883\") " pod="kube-system/kube-controller-manager-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028906    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dc17ea22abaca2d068b5bdb8a70355e-ca-certs\") pod \"kube-apiserver-pause-220574\" (UID: \"9dc17ea22abaca2d068b5bdb8a70355e\") " pod="kube-system/kube-apiserver-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028922    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dc17ea22abaca2d068b5bdb8a70355e-k8s-certs\") pod \"kube-apiserver-pause-220574\" (UID: \"9dc17ea22abaca2d068b5bdb8a70355e\") " pod="kube-system/kube-apiserver-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028948    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dc17ea22abaca2d068b5bdb8a70355e-usr-share-ca-certificates\") pod \"kube-apiserver-pause-220574\" (UID: \"9dc17ea22abaca2d068b5bdb8a70355e\") " pod="kube-system/kube-apiserver-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.029227    3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-220574?timeout=10s\": dial tcp 192.168.39.207:8443: connect: connection refused" interval="400ms"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.112112    3187 kubelet_node_status.go:73] "Attempting to register node" node="pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.112983    3187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.207:8443: connect: connection refused" node="pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.258805    3187 scope.go:117] "RemoveContainer" containerID="93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.259177    3187 scope.go:117] "RemoveContainer" containerID="8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.260270    3187 scope.go:117] "RemoveContainer" containerID="f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.430359    3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-220574?timeout=10s\": dial tcp 192.168.39.207:8443: connect: connection refused" interval="800ms"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.514490    3187 kubelet_node_status.go:73] "Attempting to register node" node="pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.515328    3187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.207:8443: connect: connection refused" node="pause-220574"
	Jul 29 13:22:47 pause-220574 kubelet[3187]: I0729 13:22:47.316676    3187 kubelet_node_status.go:73] "Attempting to register node" node="pause-220574"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.588916    3187 kubelet_node_status.go:112] "Node was previously registered" node="pause-220574"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.589482    3187 kubelet_node_status.go:76] "Successfully registered node" node="pause-220574"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.591109    3187 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.592074    3187 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.811648    3187 apiserver.go:52] "Watching apiserver"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.815531    3187 topology_manager.go:215] "Topology Admit Handler" podUID="d102922f-5f2c-4f39-9ef4-698b8a4200b2" podNamespace="kube-system" podName="kube-proxy-9x2zj"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.815886    3187 topology_manager.go:215] "Topology Admit Handler" podUID="1389db61-0ea2-41a7-bc84-b8b0a234e2d6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8k5vv"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.821548    3187 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.868345    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d102922f-5f2c-4f39-9ef4-698b8a4200b2-xtables-lock\") pod \"kube-proxy-9x2zj\" (UID: \"d102922f-5f2c-4f39-9ef4-698b8a4200b2\") " pod="kube-system/kube-proxy-9x2zj"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.868548    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d102922f-5f2c-4f39-9ef4-698b8a4200b2-lib-modules\") pod \"kube-proxy-9x2zj\" (UID: \"d102922f-5f2c-4f39-9ef4-698b8a4200b2\") " pod="kube-system/kube-proxy-9x2zj"
	Jul 29 13:22:50 pause-220574 kubelet[3187]: I0729 13:22:50.117206    3187 scope.go:117] "RemoveContainer" containerID="65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-220574 -n pause-220574
helpers_test.go:261: (dbg) Run:  kubectl --context pause-220574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-220574 -n pause-220574
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-220574 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-220574 logs -n 25: (3.646831057s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p offline-crio-201075             | offline-crio-201075       | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:18 UTC |
	| start   | -p kubernetes-upgrade-375555       | kubernetes-upgrade-375555 | jenkins | v1.33.1 | 29 Jul 24 13:18 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-265470        | force-systemd-env-265470  | jenkins | v1.33.1 | 29 Jul 24 13:18 UTC | 29 Jul 24 13:18 UTC |
	| start   | -p stopped-upgrade-938122          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 13:18 UTC | 29 Jul 24 13:19 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:18 UTC | 29 Jul 24 13:19 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-614412          | running-upgrade-614412    | jenkins | v1.33.1 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:20 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:19 UTC |
	| start   | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:20 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-938122 stop        | minikube                  | jenkins | v1.26.0 | 29 Jul 24 13:19 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p stopped-upgrade-938122          | stopped-upgrade-938122    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-225538 sudo        | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:21 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-614412          | running-upgrade-614412    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p pause-220574 --memory=2048      | pause-220574              | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:22 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-938122          | stopped-upgrade-938122    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	| start   | -p cert-expiration-168661          | cert-expiration-168661    | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:22 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-225538 sudo        | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-225538             | NoKubernetes-225538       | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC | 29 Jul 24 13:21 UTC |
	| start   | -p force-systemd-flag-454180       | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:21 UTC | 29 Jul 24 13:22 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-220574                    | pause-220574              | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:23 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-454180 ssh cat  | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:22 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-454180       | force-systemd-flag-454180 | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC | 29 Jul 24 13:22 UTC |
	| start   | -p cert-options-606292             | cert-options-606292       | jenkins | v1.33.1 | 29 Jul 24 13:22 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-375555       | kubernetes-upgrade-375555 | jenkins | v1.33.1 | 29 Jul 24 13:23 UTC |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:22:39
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:22:39.798775  284567 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:22:39.798872  284567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:22:39.798875  284567 out.go:304] Setting ErrFile to fd 2...
	I0729 13:22:39.798879  284567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:22:39.799038  284567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:22:39.799609  284567 out.go:298] Setting JSON to false
	I0729 13:22:39.800542  284567 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11103,"bootTime":1722248257,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:22:39.800598  284567 start.go:139] virtualization: kvm guest
	I0729 13:22:39.802660  284567 out.go:177] * [cert-options-606292] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:22:39.804039  284567 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:22:39.804089  284567 notify.go:220] Checking for updates...
	I0729 13:22:39.806648  284567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:22:39.807937  284567 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:22:39.809186  284567 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:22:39.810370  284567 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:22:39.811580  284567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:22:39.813260  284567 config.go:182] Loaded profile config "cert-expiration-168661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:22:39.813394  284567 config.go:182] Loaded profile config "kubernetes-upgrade-375555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:22:39.813569  284567 config.go:182] Loaded profile config "pause-220574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:22:39.813664  284567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:22:39.849876  284567 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:22:39.851086  284567 start.go:297] selected driver: kvm2
	I0729 13:22:39.851094  284567 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:22:39.851104  284567 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:22:39.851807  284567 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:22:39.851880  284567 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:22:39.867134  284567 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:22:39.867192  284567 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:22:39.867414  284567 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 13:22:39.867467  284567 cni.go:84] Creating CNI manager for ""
	I0729 13:22:39.867475  284567 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:22:39.867480  284567 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:22:39.867544  284567 start.go:340] cluster config:
	{Name:cert-options-606292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-606292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.
1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0729 13:22:39.867633  284567 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:22:39.869257  284567 out.go:177] * Starting "cert-options-606292" primary control-plane node in "cert-options-606292" cluster
	I0729 13:22:39.870270  284567 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:22:39.870296  284567 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:22:39.870313  284567 cache.go:56] Caching tarball of preloaded images
	I0729 13:22:39.870381  284567 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:22:39.870387  284567 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:22:39.870479  284567 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/config.json ...
	I0729 13:22:39.870492  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/config.json: {Name:mk0539b7baf5e26571cc6c10e2bd5422f0854491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:22:39.870603  284567 start.go:360] acquireMachinesLock for cert-options-606292: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:22:39.870625  284567 start.go:364] duration metric: took 14.405µs to acquireMachinesLock for "cert-options-606292"
	I0729 13:22:39.870637  284567 start.go:93] Provisioning new machine with config: &{Name:cert-options-606292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.30.3 ClusterName:cert-options-606292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:22:39.870689  284567 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:22:39.872091  284567 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 13:22:39.872233  284567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:22:39.872265  284567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:22:39.886546  284567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0729 13:22:39.887011  284567 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:22:39.887534  284567 main.go:141] libmachine: Using API Version  1
	I0729 13:22:39.887549  284567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:22:39.887862  284567 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:22:39.888061  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:22:39.888202  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:22:39.888363  284567 start.go:159] libmachine.API.Create for "cert-options-606292" (driver="kvm2")
	I0729 13:22:39.888386  284567 client.go:168] LocalClient.Create starting
	I0729 13:22:39.888412  284567 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem
	I0729 13:22:39.888439  284567 main.go:141] libmachine: Decoding PEM data...
	I0729 13:22:39.888461  284567 main.go:141] libmachine: Parsing certificate...
	I0729 13:22:39.888513  284567 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem
	I0729 13:22:39.888526  284567 main.go:141] libmachine: Decoding PEM data...
	I0729 13:22:39.888533  284567 main.go:141] libmachine: Parsing certificate...
	I0729 13:22:39.888548  284567 main.go:141] libmachine: Running pre-create checks...
	I0729 13:22:39.888553  284567 main.go:141] libmachine: (cert-options-606292) Calling .PreCreateCheck
	I0729 13:22:39.888949  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetConfigRaw
	I0729 13:22:39.889322  284567 main.go:141] libmachine: Creating machine...
	I0729 13:22:39.889329  284567 main.go:141] libmachine: (cert-options-606292) Calling .Create
	I0729 13:22:39.889455  284567 main.go:141] libmachine: (cert-options-606292) Creating KVM machine...
	I0729 13:22:39.890709  284567 main.go:141] libmachine: (cert-options-606292) DBG | found existing default KVM network
	I0729 13:22:39.891970  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.891820  284590 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:5c:66} reservation:<nil>}
	I0729 13:22:39.892756  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.892696  284590 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:73:7e} reservation:<nil>}
	I0729 13:22:39.893772  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.893716  284590 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:52:3b} reservation:<nil>}
	I0729 13:22:39.896060  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.895946  284590 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 13:22:39.897287  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.897191  284590 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000404d90}
	I0729 13:22:39.897342  284567 main.go:141] libmachine: (cert-options-606292) DBG | created network xml: 
	I0729 13:22:39.897351  284567 main.go:141] libmachine: (cert-options-606292) DBG | <network>
	I0729 13:22:39.897357  284567 main.go:141] libmachine: (cert-options-606292) DBG |   <name>mk-cert-options-606292</name>
	I0729 13:22:39.897360  284567 main.go:141] libmachine: (cert-options-606292) DBG |   <dns enable='no'/>
	I0729 13:22:39.897365  284567 main.go:141] libmachine: (cert-options-606292) DBG |   
	I0729 13:22:39.897374  284567 main.go:141] libmachine: (cert-options-606292) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0729 13:22:39.897384  284567 main.go:141] libmachine: (cert-options-606292) DBG |     <dhcp>
	I0729 13:22:39.897388  284567 main.go:141] libmachine: (cert-options-606292) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0729 13:22:39.897393  284567 main.go:141] libmachine: (cert-options-606292) DBG |     </dhcp>
	I0729 13:22:39.897396  284567 main.go:141] libmachine: (cert-options-606292) DBG |   </ip>
	I0729 13:22:39.897400  284567 main.go:141] libmachine: (cert-options-606292) DBG |   
	I0729 13:22:39.897403  284567 main.go:141] libmachine: (cert-options-606292) DBG | </network>
	I0729 13:22:39.897408  284567 main.go:141] libmachine: (cert-options-606292) DBG | 
	I0729 13:22:39.902528  284567 main.go:141] libmachine: (cert-options-606292) DBG | trying to create private KVM network mk-cert-options-606292 192.168.83.0/24...
	I0729 13:22:39.969864  284567 main.go:141] libmachine: (cert-options-606292) DBG | private KVM network mk-cert-options-606292 192.168.83.0/24 created
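Note (illustrative, not part of the captured log): the private libvirt network created above can be inspected on the host with standard libvirt tooling, using the same qemu:///system connection the kvm2 driver logs show, for example:

	virsh --connect qemu:///system net-list --all
	virsh --connect qemu:///system net-dumpxml mk-cert-options-606292

The network name mk-cert-options-606292 and the 192.168.83.0/24 subnet are taken directly from the log lines above; the virsh commands are only a sketch of how to view what the driver created.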
	I0729 13:22:39.969885  284567 main.go:141] libmachine: (cert-options-606292) Setting up store path in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292 ...
	I0729 13:22:39.969894  284567 main.go:141] libmachine: (cert-options-606292) Building disk image from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:22:39.969901  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:39.969830  284590 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:22:39.970025  284567 main.go:141] libmachine: (cert-options-606292) Downloading /home/jenkins/minikube-integration/19341-233093/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:22:40.213038  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:40.212909  284590 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa...
	I0729 13:22:40.341964  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:40.341833  284590 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/cert-options-606292.rawdisk...
	I0729 13:22:40.341986  284567 main.go:141] libmachine: (cert-options-606292) DBG | Writing magic tar header
	I0729 13:22:40.341998  284567 main.go:141] libmachine: (cert-options-606292) DBG | Writing SSH key tar header
	I0729 13:22:40.342113  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:40.341997  284590 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292 ...
	I0729 13:22:40.342148  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292
	I0729 13:22:40.342166  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines
	I0729 13:22:40.342181  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:22:40.342193  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292 (perms=drwx------)
	I0729 13:22:40.342205  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:22:40.342211  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube (perms=drwxr-xr-x)
	I0729 13:22:40.342217  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093
	I0729 13:22:40.342235  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:22:40.342240  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:22:40.342248  284567 main.go:141] libmachine: (cert-options-606292) DBG | Checking permissions on dir: /home
	I0729 13:22:40.342255  284567 main.go:141] libmachine: (cert-options-606292) DBG | Skipping /home - not owner
	I0729 13:22:40.342264  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093 (perms=drwxrwxr-x)
	I0729 13:22:40.342277  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:22:40.342284  284567 main.go:141] libmachine: (cert-options-606292) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:22:40.342315  284567 main.go:141] libmachine: (cert-options-606292) Creating domain...
	I0729 13:22:40.343381  284567 main.go:141] libmachine: (cert-options-606292) define libvirt domain using xml: 
	I0729 13:22:40.343389  284567 main.go:141] libmachine: (cert-options-606292) <domain type='kvm'>
	I0729 13:22:40.343394  284567 main.go:141] libmachine: (cert-options-606292)   <name>cert-options-606292</name>
	I0729 13:22:40.343398  284567 main.go:141] libmachine: (cert-options-606292)   <memory unit='MiB'>2048</memory>
	I0729 13:22:40.343402  284567 main.go:141] libmachine: (cert-options-606292)   <vcpu>2</vcpu>
	I0729 13:22:40.343413  284567 main.go:141] libmachine: (cert-options-606292)   <features>
	I0729 13:22:40.343418  284567 main.go:141] libmachine: (cert-options-606292)     <acpi/>
	I0729 13:22:40.343421  284567 main.go:141] libmachine: (cert-options-606292)     <apic/>
	I0729 13:22:40.343425  284567 main.go:141] libmachine: (cert-options-606292)     <pae/>
	I0729 13:22:40.343428  284567 main.go:141] libmachine: (cert-options-606292)     
	I0729 13:22:40.343433  284567 main.go:141] libmachine: (cert-options-606292)   </features>
	I0729 13:22:40.343436  284567 main.go:141] libmachine: (cert-options-606292)   <cpu mode='host-passthrough'>
	I0729 13:22:40.343440  284567 main.go:141] libmachine: (cert-options-606292)   
	I0729 13:22:40.343443  284567 main.go:141] libmachine: (cert-options-606292)   </cpu>
	I0729 13:22:40.343447  284567 main.go:141] libmachine: (cert-options-606292)   <os>
	I0729 13:22:40.343452  284567 main.go:141] libmachine: (cert-options-606292)     <type>hvm</type>
	I0729 13:22:40.343465  284567 main.go:141] libmachine: (cert-options-606292)     <boot dev='cdrom'/>
	I0729 13:22:40.343470  284567 main.go:141] libmachine: (cert-options-606292)     <boot dev='hd'/>
	I0729 13:22:40.343478  284567 main.go:141] libmachine: (cert-options-606292)     <bootmenu enable='no'/>
	I0729 13:22:40.343487  284567 main.go:141] libmachine: (cert-options-606292)   </os>
	I0729 13:22:40.343493  284567 main.go:141] libmachine: (cert-options-606292)   <devices>
	I0729 13:22:40.343498  284567 main.go:141] libmachine: (cert-options-606292)     <disk type='file' device='cdrom'>
	I0729 13:22:40.343508  284567 main.go:141] libmachine: (cert-options-606292)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/boot2docker.iso'/>
	I0729 13:22:40.343513  284567 main.go:141] libmachine: (cert-options-606292)       <target dev='hdc' bus='scsi'/>
	I0729 13:22:40.343532  284567 main.go:141] libmachine: (cert-options-606292)       <readonly/>
	I0729 13:22:40.343546  284567 main.go:141] libmachine: (cert-options-606292)     </disk>
	I0729 13:22:40.343552  284567 main.go:141] libmachine: (cert-options-606292)     <disk type='file' device='disk'>
	I0729 13:22:40.343558  284567 main.go:141] libmachine: (cert-options-606292)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:22:40.343578  284567 main.go:141] libmachine: (cert-options-606292)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/cert-options-606292.rawdisk'/>
	I0729 13:22:40.343582  284567 main.go:141] libmachine: (cert-options-606292)       <target dev='hda' bus='virtio'/>
	I0729 13:22:40.343586  284567 main.go:141] libmachine: (cert-options-606292)     </disk>
	I0729 13:22:40.343591  284567 main.go:141] libmachine: (cert-options-606292)     <interface type='network'>
	I0729 13:22:40.343596  284567 main.go:141] libmachine: (cert-options-606292)       <source network='mk-cert-options-606292'/>
	I0729 13:22:40.343602  284567 main.go:141] libmachine: (cert-options-606292)       <model type='virtio'/>
	I0729 13:22:40.343606  284567 main.go:141] libmachine: (cert-options-606292)     </interface>
	I0729 13:22:40.343609  284567 main.go:141] libmachine: (cert-options-606292)     <interface type='network'>
	I0729 13:22:40.343614  284567 main.go:141] libmachine: (cert-options-606292)       <source network='default'/>
	I0729 13:22:40.343621  284567 main.go:141] libmachine: (cert-options-606292)       <model type='virtio'/>
	I0729 13:22:40.343626  284567 main.go:141] libmachine: (cert-options-606292)     </interface>
	I0729 13:22:40.343629  284567 main.go:141] libmachine: (cert-options-606292)     <serial type='pty'>
	I0729 13:22:40.343633  284567 main.go:141] libmachine: (cert-options-606292)       <target port='0'/>
	I0729 13:22:40.343636  284567 main.go:141] libmachine: (cert-options-606292)     </serial>
	I0729 13:22:40.343644  284567 main.go:141] libmachine: (cert-options-606292)     <console type='pty'>
	I0729 13:22:40.343647  284567 main.go:141] libmachine: (cert-options-606292)       <target type='serial' port='0'/>
	I0729 13:22:40.343651  284567 main.go:141] libmachine: (cert-options-606292)     </console>
	I0729 13:22:40.343655  284567 main.go:141] libmachine: (cert-options-606292)     <rng model='virtio'>
	I0729 13:22:40.343664  284567 main.go:141] libmachine: (cert-options-606292)       <backend model='random'>/dev/random</backend>
	I0729 13:22:40.343667  284567 main.go:141] libmachine: (cert-options-606292)     </rng>
	I0729 13:22:40.343671  284567 main.go:141] libmachine: (cert-options-606292)     
	I0729 13:22:40.343674  284567 main.go:141] libmachine: (cert-options-606292)     
	I0729 13:22:40.343679  284567 main.go:141] libmachine: (cert-options-606292)   </devices>
	I0729 13:22:40.343682  284567 main.go:141] libmachine: (cert-options-606292) </domain>
	I0729 13:22:40.343705  284567 main.go:141] libmachine: (cert-options-606292) 
	I0729 13:22:40.348040  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:a9:71:c3 in network default
	I0729 13:22:40.348642  284567 main.go:141] libmachine: (cert-options-606292) Ensuring networks are active...
	I0729 13:22:40.348665  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:40.349526  284567 main.go:141] libmachine: (cert-options-606292) Ensuring network default is active
	I0729 13:22:40.349787  284567 main.go:141] libmachine: (cert-options-606292) Ensuring network mk-cert-options-606292 is active
	I0729 13:22:40.350447  284567 main.go:141] libmachine: (cert-options-606292) Getting domain xml...
	I0729 13:22:40.351198  284567 main.go:141] libmachine: (cert-options-606292) Creating domain...
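Note (illustrative, not part of the captured log): the domain XML printed above is defined and started through the libvirt API by the kvm2 driver plugin; a roughly equivalent manual sequence, assuming the XML were saved to a file, would be:

	virsh --connect qemu:///system define cert-options-606292.xml
	virsh --connect qemu:///system start cert-options-606292

The filename cert-options-606292.xml is hypothetical; the domain name, disks, and network attachments come from the XML in the log.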
	I0729 13:22:41.575568  284567 main.go:141] libmachine: (cert-options-606292) Waiting to get IP...
	I0729 13:22:41.576251  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:41.576674  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:41.576689  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:41.576643  284590 retry.go:31] will retry after 290.665118ms: waiting for machine to come up
	I0729 13:22:41.869224  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:41.869839  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:41.869895  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:41.869801  284590 retry.go:31] will retry after 296.639683ms: waiting for machine to come up
	I0729 13:22:42.168394  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:42.168857  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:42.168874  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:42.168811  284590 retry.go:31] will retry after 464.665086ms: waiting for machine to come up
	I0729 13:22:42.635134  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:42.635733  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:42.635768  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:42.635694  284590 retry.go:31] will retry after 592.412679ms: waiting for machine to come up
	I0729 13:22:43.230069  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:43.230582  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:43.230615  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:43.230511  284590 retry.go:31] will retry after 596.348698ms: waiting for machine to come up
	I0729 13:22:43.828062  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:43.828551  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:43.828574  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:43.828487  284590 retry.go:31] will retry after 614.144629ms: waiting for machine to come up
	I0729 13:22:44.444343  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:44.444926  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:44.444950  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:44.444891  284590 retry.go:31] will retry after 1.07436479s: waiting for machine to come up
	I0729 13:22:44.426950  284129 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6 93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566 65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb 8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c 25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 bfc43f044d973833a476be05e5920229bb41232d6504fb1e74c079dc6b327409 10bddfb733011425dbb2b5f91262bcea17598f0f7b3ee05ecf38981f7f1a1923 e6882649b59f199b1721caf1dad3a96bd80350c124126315398c7ef0d630503f f9beea7ed45281df832306978552e70663db7e1a09eda35c301c3845b800095d: (10.749304512s)
	W0729 13:22:44.427055  284129 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6 93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566 65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb 8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c 25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 bfc43f044d973833a476be05e5920229bb41232d6504fb1e74c079dc6b327409 10bddfb733011425dbb2b5f91262bcea17598f0f7b3ee05ecf38981f7f1a1923 e6882649b59f199b1721caf1dad3a96bd80350c124126315398c7ef0d630503f f9beea7ed45281df832306978552e70663db7e1a09eda35c301c3845b800095d: Process exited with status 1
	stdout:
	8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6
	93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e
	f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566
	65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb
	8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c
	25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec
	
	stderr:
	E0729 13:22:44.415732    2892 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967\": container with ID starting with 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 not found: ID does not exist" containerID="3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967"
	time="2024-07-29T13:22:44Z" level=fatal msg="stopping the container \"3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967\": rpc error: code = NotFound desc = could not find container \"3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967\": container with ID starting with 3530a93f8c959fff78395e0a2906c4903615b14a1fbf25bcf4150c62fe134967 not found: ID does not exist"
	I0729 13:22:44.427132  284129 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:22:44.475094  284129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:22:44.486515  284129 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 29 13:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul 29 13:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 29 13:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul 29 13:21 /etc/kubernetes/scheduler.conf
	
	I0729 13:22:44.486593  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:22:44.497586  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:22:44.508180  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:22:44.518176  284129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:22:44.518234  284129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:22:44.533035  284129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:22:44.542626  284129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:22:44.542676  284129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:22:44.552183  284129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:22:44.561813  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:44.630531  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:45.442990  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:45.669945  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:45.743154  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:45.520387  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:45.520954  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:45.520976  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:45.520889  284590 retry.go:31] will retry after 1.115450205s: waiting for machine to come up
	I0729 13:22:46.638169  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:46.638717  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:46.638737  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:46.638671  284590 retry.go:31] will retry after 1.484431536s: waiting for machine to come up
	I0729 13:22:48.124352  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:48.124915  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:48.124937  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:48.124868  284590 retry.go:31] will retry after 1.936812423s: waiting for machine to come up
	I0729 13:22:45.851002  284129 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:22:45.851127  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:22:46.351880  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:22:46.852195  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:22:46.866675  284129 api_server.go:72] duration metric: took 1.015671524s to wait for apiserver process to appear ...
	I0729 13:22:46.866708  284129 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:22:46.866733  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:49.382419  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:22:49.382459  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:22:49.382481  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:49.436014  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:22:49.436052  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:22:49.867224  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:49.880857  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:22:49.880903  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:22:50.367340  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:50.373322  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:22:50.373350  284129 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:22:50.867362  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:22:50.871797  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I0729 13:22:50.880804  284129 api_server.go:141] control plane version: v1.30.3
	I0729 13:22:50.880839  284129 api_server.go:131] duration metric: took 4.014121842s to wait for apiserver health ...
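The 500 responses above come from the apiserver's verbose /healthz handler, which reports each readiness check as [+] or [-]; here only poststarthook/rbac/bootstrap-roles is failing until the bootstrap RBAC roles are reconciled, after which the endpoint flips to 200. A minimal manual check of the same endpoint, assuming the pause-220574 context exists in the local kubeconfig, might look like:

    # Sketch: query the same verbose healthz endpoint the test polls above.
    kubectl --context pause-220574 get --raw '/healthz?verbose'
    # Or hit the endpoint from the log directly (GET /healthz is normally readable
    # anonymously via the system:public-info-viewer ClusterRole):
    curl -sk 'https://192.168.39.207:8443/healthz?verbose'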
	I0729 13:22:50.880851  284129 cni.go:84] Creating CNI manager for ""
	I0729 13:22:50.880860  284129 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:22:50.882232  284129 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:22:50.063498  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:50.063967  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:50.063990  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:50.063946  284590 retry.go:31] will retry after 2.118498254s: waiting for machine to come up
	I0729 13:22:52.183912  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:52.184387  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:52.184409  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:52.184348  284590 retry.go:31] will retry after 3.566642473s: waiting for machine to come up
	I0729 13:22:50.883485  284129 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:22:50.899181  284129 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
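The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its exact contents are not shown in the log. As a rough illustration only (the commented JSON below is an assumption, not the recorded file), a bridge conflist of this kind typically looks like:

    # Sketch: inspect the conflist written above; the commented JSON is illustrative,
    # the real file may differ.
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
    #       "ipMasq": true, "hairpinMode": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }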
	I0729 13:22:50.924359  284129 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:22:50.936737  284129 system_pods.go:59] 6 kube-system pods found
	I0729 13:22:50.936775  284129 system_pods.go:61] "coredns-7db6d8ff4d-8k5vv" [1389db61-0ea2-41a7-bc84-b8b0a234e2d6] Running
	I0729 13:22:50.936788  284129 system_pods.go:61] "etcd-pause-220574" [5eb79bac-2629-42d3-aaa9-b43b2e52400f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:22:50.936814  284129 system_pods.go:61] "kube-apiserver-pause-220574" [247fbc96-2edb-4e03-bbbd-6426889e69b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:22:50.936825  284129 system_pods.go:61] "kube-controller-manager-pause-220574" [1d9139fe-74e9-4d9d-a2e4-03324fcd2c42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:22:50.936834  284129 system_pods.go:61] "kube-proxy-9x2zj" [d102922f-5f2c-4f39-9ef4-698b8a4200b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:22:50.936844  284129 system_pods.go:61] "kube-scheduler-pause-220574" [d965087c-3020-4d98-8f81-022e403ae53b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:22:50.936865  284129 system_pods.go:74] duration metric: took 12.474896ms to wait for pod list to return data ...
	I0729 13:22:50.936874  284129 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:22:50.948409  284129 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:22:50.948503  284129 node_conditions.go:123] node cpu capacity is 2
	I0729 13:22:50.948529  284129 node_conditions.go:105] duration metric: took 11.64855ms to run NodePressure ...
	I0729 13:22:50.948551  284129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:22:51.244348  284129 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:22:51.248834  284129 kubeadm.go:739] kubelet initialised
	I0729 13:22:51.248856  284129 kubeadm.go:740] duration metric: took 4.477627ms waiting for restarted kubelet to initialise ...
	I0729 13:22:51.248867  284129 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:22:51.253248  284129 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:51.258527  284129 pod_ready.go:92] pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:22:51.258552  284129 pod_ready.go:81] duration metric: took 5.277498ms for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:51.258563  284129 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:53.268486  284129 pod_ready.go:102] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:22:55.765402  284129 pod_ready.go:102] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:22:55.752878  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:55.753314  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:55.753362  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:55.753292  284590 retry.go:31] will retry after 2.761086634s: waiting for machine to come up
	I0729 13:22:58.517911  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:22:58.518379  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find current IP address of domain cert-options-606292 in network mk-cert-options-606292
	I0729 13:22:58.518401  284567 main.go:141] libmachine: (cert-options-606292) DBG | I0729 13:22:58.518325  284590 retry.go:31] will retry after 5.085557201s: waiting for machine to come up
	I0729 13:22:57.765496  284129 pod_ready.go:102] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:22:59.765412  284129 pod_ready.go:92] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:22:59.765439  284129 pod_ready.go:81] duration metric: took 8.50686832s for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:22:59.765451  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:03.606100  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.606653  284567 main.go:141] libmachine: (cert-options-606292) Found IP for machine: 192.168.83.228
	I0729 13:23:03.606822  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has current primary IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.606837  284567 main.go:141] libmachine: (cert-options-606292) Reserving static IP address...
	I0729 13:23:03.607124  284567 main.go:141] libmachine: (cert-options-606292) DBG | unable to find host DHCP lease matching {name: "cert-options-606292", mac: "52:54:00:7a:a6:d0", ip: "192.168.83.228"} in network mk-cert-options-606292
	I0729 13:23:03.681529  284567 main.go:141] libmachine: (cert-options-606292) DBG | Getting to WaitForSSH function...
	I0729 13:23:03.681553  284567 main.go:141] libmachine: (cert-options-606292) Reserved static IP address: 192.168.83.228
	I0729 13:23:03.681566  284567 main.go:141] libmachine: (cert-options-606292) Waiting for SSH to be available...
	I0729 13:23:03.684280  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.684659  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:03.684702  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.684909  284567 main.go:141] libmachine: (cert-options-606292) DBG | Using SSH client type: external
	I0729 13:23:03.684932  284567 main.go:141] libmachine: (cert-options-606292) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa (-rw-------)
	I0729 13:23:03.684958  284567 main.go:141] libmachine: (cert-options-606292) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:23:03.684971  284567 main.go:141] libmachine: (cert-options-606292) DBG | About to run SSH command:
	I0729 13:23:03.684982  284567 main.go:141] libmachine: (cert-options-606292) DBG | exit 0
	I0729 13:23:03.817148  284567 main.go:141] libmachine: (cert-options-606292) DBG | SSH cmd err, output: <nil>: 
	I0729 13:23:03.817397  284567 main.go:141] libmachine: (cert-options-606292) KVM machine creation complete!
	I0729 13:23:03.817751  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetConfigRaw
	I0729 13:23:03.818332  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:03.818497  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:03.818648  284567 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:23:03.818657  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetState
	I0729 13:23:03.819797  284567 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:23:03.819804  284567 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:23:03.819808  284567 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:23:03.819814  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:03.822337  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.822689  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:03.822726  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.822826  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:03.822960  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.823093  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.823250  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:03.823455  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:03.823644  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:03.823650  284567 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:23:03.936156  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:23:03.936172  284567 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:23:03.936178  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:03.939013  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.939398  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:03.939422  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:03.939710  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:03.939951  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.940105  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:03.940276  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:03.940468  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:03.940671  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:03.940679  284567 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:23:04.053840  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:23:04.053913  284567 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:23:04.053918  284567 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:23:04.053925  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:23:04.054189  284567 buildroot.go:166] provisioning hostname "cert-options-606292"
	I0729 13:23:04.054217  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:23:04.054423  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.057336  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.057730  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.057754  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.057992  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.058211  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.058380  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.058499  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.058627  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:04.058789  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:04.058795  284567 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-606292 && echo "cert-options-606292" | sudo tee /etc/hostname
	I0729 13:23:04.187606  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-606292
	
	I0729 13:23:04.187629  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.190421  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.190794  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.190812  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.190988  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.191216  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.191364  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.191529  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.191792  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:04.191978  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:04.191989  284567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-606292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-606292/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-606292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:23:04.314260  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:23:04.314281  284567 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:23:04.314323  284567 buildroot.go:174] setting up certificates
	I0729 13:23:04.314332  284567 provision.go:84] configureAuth start
	I0729 13:23:04.314341  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetMachineName
	I0729 13:23:04.314616  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetIP
	I0729 13:23:04.318136  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.318554  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.318592  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.318774  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.320778  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.321152  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.321169  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.321312  284567 provision.go:143] copyHostCerts
	I0729 13:23:04.321367  284567 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:23:04.321398  284567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:23:04.322337  284567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:23:04.322468  284567 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:23:04.322474  284567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:23:04.322503  284567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:23:04.322552  284567 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:23:04.322555  284567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:23:04.322575  284567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:23:04.322613  284567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.cert-options-606292 san=[127.0.0.1 192.168.83.228 cert-options-606292 localhost minikube]
	I0729 13:23:04.426488  284567 provision.go:177] copyRemoteCerts
	I0729 13:23:04.426541  284567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:23:04.426565  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.429429  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.429794  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.429811  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.429966  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.430187  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.430319  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.430457  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:04.519182  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:23:04.555875  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 13:23:04.581388  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:23:04.606307  284567 provision.go:87] duration metric: took 291.963109ms to configureAuth
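configureAuth above generated a server certificate whose SANs include 127.0.0.1, 192.168.83.228, cert-options-606292, localhost and minikube, and copied it to /etc/docker/server.pem on the guest. One way to confirm those SANs afterwards, sketched here under the assumption that the cert-options-606292 profile is reachable with minikube ssh:

    # Sketch: print the Subject Alternative Names of the provisioned server cert.
    minikube -p cert-options-606292 ssh -- \
      sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'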
	I0729 13:23:04.606326  284567 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:23:04.606513  284567 config.go:182] Loaded profile config "cert-options-606292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:23:04.606577  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.609302  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.609697  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.609721  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.609923  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.610123  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.610287  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.610455  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.610608  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:04.610830  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:04.610847  284567 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:23:01.773090  284129 pod_ready.go:102] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:23:04.271365  284129 pod_ready.go:102] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"False"
	I0729 13:23:05.772462  284129 pod_ready.go:92] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.772487  284129 pod_ready.go:81] duration metric: took 6.007029665s for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.772497  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.777416  284129 pod_ready.go:92] pod "kube-controller-manager-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.777438  284129 pod_ready.go:81] duration metric: took 4.93488ms for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.777453  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.782530  284129 pod_ready.go:92] pod "kube-proxy-9x2zj" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.782551  284129 pod_ready.go:81] duration metric: took 5.091325ms for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.782559  284129 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.787500  284129 pod_ready.go:92] pod "kube-scheduler-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:05.787525  284129 pod_ready.go:81] duration metric: took 4.959545ms for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:05.787533  284129 pod_ready.go:38] duration metric: took 14.53865416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:23:05.787555  284129 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:23:05.802804  284129 ops.go:34] apiserver oom_adj: -16
	I0729 13:23:05.802831  284129 kubeadm.go:597] duration metric: took 32.314694038s to restartPrimaryControlPlane
	I0729 13:23:05.802846  284129 kubeadm.go:394] duration metric: took 32.6648974s to StartCluster
	I0729 13:23:05.802871  284129 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:05.802962  284129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:23:05.803901  284129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:05.804188  284129 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:23:05.804436  284129 config.go:182] Loaded profile config "pause-220574": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:23:05.804416  284129 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:23:05.806965  284129 out.go:177] * Verifying Kubernetes components...
	I0729 13:23:05.807002  284129 out.go:177] * Enabled addons: 
	I0729 13:23:05.808358  284129 addons.go:510] duration metric: took 3.941625ms for enable addons: enabled=[]
	I0729 13:23:05.808372  284129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:23:04.891655  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:23:04.891670  284567 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:23:04.891677  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetURL
	I0729 13:23:04.893058  284567 main.go:141] libmachine: (cert-options-606292) DBG | Using libvirt version 6000000
	I0729 13:23:04.895233  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.895593  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.895617  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.895755  284567 main.go:141] libmachine: Docker is up and running!
	I0729 13:23:04.895762  284567 main.go:141] libmachine: Reticulating splines...
	I0729 13:23:04.895773  284567 client.go:171] duration metric: took 25.007375238s to LocalClient.Create
	I0729 13:23:04.895799  284567 start.go:167] duration metric: took 25.007438657s to libmachine.API.Create "cert-options-606292"
	I0729 13:23:04.895816  284567 start.go:293] postStartSetup for "cert-options-606292" (driver="kvm2")
	I0729 13:23:04.895826  284567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:23:04.895844  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:04.896059  284567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:23:04.896078  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:04.898453  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.898840  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:04.898861  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:04.898981  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:04.899179  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:04.899342  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:04.899484  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:04.988086  284567 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:23:04.992180  284567 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:23:04.992197  284567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:23:04.992340  284567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:23:04.992430  284567 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:23:04.992540  284567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:23:05.002472  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:23:05.026227  284567 start.go:296] duration metric: took 130.399957ms for postStartSetup
	I0729 13:23:05.026266  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetConfigRaw
	I0729 13:23:05.026909  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetIP
	I0729 13:23:05.029747  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.030086  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.030106  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.030474  284567 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/config.json ...
	I0729 13:23:05.030688  284567 start.go:128] duration metric: took 25.159989393s to createHost
	I0729 13:23:05.030707  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:05.032991  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.033351  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.033372  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.033554  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:05.033763  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.033959  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.034120  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:05.034290  284567 main.go:141] libmachine: Using SSH client type: native
	I0729 13:23:05.034501  284567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.228 22 <nil> <nil>}
	I0729 13:23:05.034507  284567 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:23:05.154024  284567 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259385.129923257
	
	I0729 13:23:05.154040  284567 fix.go:216] guest clock: 1722259385.129923257
	I0729 13:23:05.154049  284567 fix.go:229] Guest: 2024-07-29 13:23:05.129923257 +0000 UTC Remote: 2024-07-29 13:23:05.030695472 +0000 UTC m=+25.267780610 (delta=99.227785ms)
	I0729 13:23:05.154086  284567 fix.go:200] guest clock delta is within tolerance: 99.227785ms
	I0729 13:23:05.154090  284567 start.go:83] releasing machines lock for "cert-options-606292", held for 25.283460413s
	I0729 13:23:05.154111  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.154392  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetIP
	I0729 13:23:05.157250  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.157670  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.157697  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.157875  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.158393  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.158600  284567 main.go:141] libmachine: (cert-options-606292) Calling .DriverName
	I0729 13:23:05.158705  284567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:23:05.158750  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:05.158847  284567 ssh_runner.go:195] Run: cat /version.json
	I0729 13:23:05.158866  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHHostname
	I0729 13:23:05.161512  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.161806  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.161834  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.161851  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.161994  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:05.162186  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.162245  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:05.162264  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:05.162362  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:05.162433  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHPort
	I0729 13:23:05.162515  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:05.162604  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHKeyPath
	I0729 13:23:05.162781  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetSSHUsername
	I0729 13:23:05.162921  284567 sshutil.go:53] new ssh client: &{IP:192.168.83.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/cert-options-606292/id_rsa Username:docker}
	I0729 13:23:05.263611  284567 ssh_runner.go:195] Run: systemctl --version
	I0729 13:23:05.270028  284567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:23:05.436114  284567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:23:05.442864  284567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:23:05.442918  284567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:23:05.458310  284567 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:23:05.458326  284567 start.go:495] detecting cgroup driver to use...
	I0729 13:23:05.458381  284567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:23:05.475082  284567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:23:05.488538  284567 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:23:05.488582  284567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:23:05.503454  284567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:23:05.518311  284567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:23:05.637743  284567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:23:05.808772  284567 docker.go:233] disabling docker service ...
	I0729 13:23:05.808835  284567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:23:05.827156  284567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:23:05.840232  284567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:23:05.964238  284567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:23:06.100990  284567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:23:06.115815  284567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:23:06.135787  284567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:23:06.135840  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.145774  284567 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:23:06.145839  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.156083  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.165976  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.176463  284567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:23:06.187707  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.198557  284567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.217006  284567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:23:06.227480  284567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:23:06.237191  284567 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:23:06.237238  284567 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:23:06.249970  284567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:23:06.260110  284567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:23:06.386423  284567 ssh_runner.go:195] Run: sudo systemctl restart crio
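Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with at least the following settings, reconstructed directly from the commands in the log (anything else already in that drop-in is preserved):

    # Verify the net effect of the sed edits on the CRI-O drop-in:
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (reconstructed from the log):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]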
	I0729 13:23:06.526332  284567 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:23:06.526414  284567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:23:06.531262  284567 start.go:563] Will wait 60s for crictl version
	I0729 13:23:06.531319  284567 ssh_runner.go:195] Run: which crictl
	I0729 13:23:06.535090  284567 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:23:06.576748  284567 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:23:06.576865  284567 ssh_runner.go:195] Run: crio --version
	I0729 13:23:06.607446  284567 ssh_runner.go:195] Run: crio --version
	I0729 13:23:06.640321  284567 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:23:06.012698  284129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:23:06.033843  284129 node_ready.go:35] waiting up to 6m0s for node "pause-220574" to be "Ready" ...
	I0729 13:23:06.037885  284129 node_ready.go:49] node "pause-220574" has status "Ready":"True"
	I0729 13:23:06.037906  284129 node_ready.go:38] duration metric: took 4.01969ms for node "pause-220574" to be "Ready" ...
	I0729 13:23:06.037915  284129 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:23:06.045262  284129 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.170370  284129 pod_ready.go:92] pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:06.170395  284129 pod_ready.go:81] duration metric: took 125.110945ms for pod "coredns-7db6d8ff4d-8k5vv" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.170405  284129 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.570907  284129 pod_ready.go:92] pod "etcd-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:06.570941  284129 pod_ready.go:81] duration metric: took 400.52756ms for pod "etcd-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.570958  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.970615  284129 pod_ready.go:92] pod "kube-apiserver-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:06.970644  284129 pod_ready.go:81] duration metric: took 399.678679ms for pod "kube-apiserver-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:06.970654  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.371066  284129 pod_ready.go:92] pod "kube-controller-manager-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:07.371098  284129 pod_ready.go:81] duration metric: took 400.435224ms for pod "kube-controller-manager-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.371115  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.774353  284129 pod_ready.go:92] pod "kube-proxy-9x2zj" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:07.774425  284129 pod_ready.go:81] duration metric: took 403.294082ms for pod "kube-proxy-9x2zj" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:07.774444  284129 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:08.170349  284129 pod_ready.go:92] pod "kube-scheduler-pause-220574" in "kube-system" namespace has status "Ready":"True"
	I0729 13:23:08.170383  284129 pod_ready.go:81] duration metric: took 395.930424ms for pod "kube-scheduler-pause-220574" in "kube-system" namespace to be "Ready" ...
	I0729 13:23:08.170395  284129 pod_ready.go:38] duration metric: took 2.13246831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
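The readiness loop above is the test's own polling of system-critical pods; an equivalent one-shot check with kubectl, assuming the pause-220574 context is in the local kubeconfig, would be roughly:

    # Sketch: wait on the same labels the test watches (kube-dns, kube-proxy,
    # etcd, kube-apiserver, kube-controller-manager, kube-scheduler).
    kubectl --context pause-220574 -n kube-system wait pod \
      -l 'k8s-app in (kube-dns, kube-proxy)' --for=condition=Ready --timeout=6m
    kubectl --context pause-220574 -n kube-system wait pod \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
      --for=condition=Ready --timeout=6m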
	I0729 13:23:08.170414  284129 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:23:08.170482  284129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:23:08.185160  284129 api_server.go:72] duration metric: took 2.380925489s to wait for apiserver process to appear ...
	I0729 13:23:08.185193  284129 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:23:08.185219  284129 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0729 13:23:08.189672  284129 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I0729 13:23:08.190666  284129 api_server.go:141] control plane version: v1.30.3
	I0729 13:23:08.190693  284129 api_server.go:131] duration metric: took 5.490899ms to wait for apiserver health ...
	I0729 13:23:08.190704  284129 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:23:08.372302  284129 system_pods.go:59] 6 kube-system pods found
	I0729 13:23:08.372334  284129 system_pods.go:61] "coredns-7db6d8ff4d-8k5vv" [1389db61-0ea2-41a7-bc84-b8b0a234e2d6] Running
	I0729 13:23:08.372338  284129 system_pods.go:61] "etcd-pause-220574" [5eb79bac-2629-42d3-aaa9-b43b2e52400f] Running
	I0729 13:23:08.372342  284129 system_pods.go:61] "kube-apiserver-pause-220574" [247fbc96-2edb-4e03-bbbd-6426889e69b2] Running
	I0729 13:23:08.372345  284129 system_pods.go:61] "kube-controller-manager-pause-220574" [1d9139fe-74e9-4d9d-a2e4-03324fcd2c42] Running
	I0729 13:23:08.372348  284129 system_pods.go:61] "kube-proxy-9x2zj" [d102922f-5f2c-4f39-9ef4-698b8a4200b2] Running
	I0729 13:23:08.372351  284129 system_pods.go:61] "kube-scheduler-pause-220574" [d965087c-3020-4d98-8f81-022e403ae53b] Running
	I0729 13:23:08.372357  284129 system_pods.go:74] duration metric: took 181.645596ms to wait for pod list to return data ...
	I0729 13:23:08.372365  284129 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:23:08.570435  284129 default_sa.go:45] found service account: "default"
	I0729 13:23:08.570476  284129 default_sa.go:55] duration metric: took 198.103554ms for default service account to be created ...
	I0729 13:23:08.570491  284129 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:23:08.774169  284129 system_pods.go:86] 6 kube-system pods found
	I0729 13:23:08.774213  284129 system_pods.go:89] "coredns-7db6d8ff4d-8k5vv" [1389db61-0ea2-41a7-bc84-b8b0a234e2d6] Running
	I0729 13:23:08.774222  284129 system_pods.go:89] "etcd-pause-220574" [5eb79bac-2629-42d3-aaa9-b43b2e52400f] Running
	I0729 13:23:08.774229  284129 system_pods.go:89] "kube-apiserver-pause-220574" [247fbc96-2edb-4e03-bbbd-6426889e69b2] Running
	I0729 13:23:08.774237  284129 system_pods.go:89] "kube-controller-manager-pause-220574" [1d9139fe-74e9-4d9d-a2e4-03324fcd2c42] Running
	I0729 13:23:08.774244  284129 system_pods.go:89] "kube-proxy-9x2zj" [d102922f-5f2c-4f39-9ef4-698b8a4200b2] Running
	I0729 13:23:08.774252  284129 system_pods.go:89] "kube-scheduler-pause-220574" [d965087c-3020-4d98-8f81-022e403ae53b] Running
	I0729 13:23:08.774262  284129 system_pods.go:126] duration metric: took 203.763933ms to wait for k8s-apps to be running ...
	I0729 13:23:08.774277  284129 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:23:08.774348  284129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:23:08.791974  284129 system_svc.go:56] duration metric: took 17.68917ms WaitForService to wait for kubelet
	I0729 13:23:08.792005  284129 kubeadm.go:582] duration metric: took 2.987777593s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:23:08.792039  284129 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:23:08.972440  284129 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:23:08.972475  284129 node_conditions.go:123] node cpu capacity is 2
	I0729 13:23:08.972491  284129 node_conditions.go:105] duration metric: took 180.445302ms to run NodePressure ...
	I0729 13:23:08.972507  284129 start.go:241] waiting for startup goroutines ...
	I0729 13:23:08.972516  284129 start.go:246] waiting for cluster config update ...
	I0729 13:23:08.972526  284129 start.go:255] writing updated cluster config ...
	I0729 13:23:08.972948  284129 ssh_runner.go:195] Run: rm -f paused
	I0729 13:23:09.024248  284129 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:23:09.026464  284129 out.go:177] * Done! kubectl is now configured to use "pause-220574" cluster and "default" namespace by default
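The readiness checks logged above can be reproduced by hand against the finished profile. A minimal sketch, assuming the kubeconfig context carries the profile name shown in the final line (pause-220574) and using only the checks that appear in the log (apiserver healthz, kube-system pods, default service account, kubelet unit):

    kubectl --context pause-220574 get --raw /healthz
    kubectl --context pause-220574 get pods -n kube-system
    kubectl --context pause-220574 get serviceaccount default
    minikube ssh -p pause-220574 "sudo systemctl is-active kubelet"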
	I0729 13:23:09.774781  280719 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:23:09.774945  280719 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:23:09.776879  280719 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:23:09.776935  280719 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:23:09.777019  280719 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:23:09.777136  280719 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:23:09.777248  280719 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:23:09.777326  280719 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:23:09.779524  280719 out.go:204]   - Generating certificates and keys ...
	I0729 13:23:09.779628  280719 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:23:09.779721  280719 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:23:09.779853  280719 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:23:09.779983  280719 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:23:09.780088  280719 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:23:09.780155  280719 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:23:09.780211  280719 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:23:09.780302  280719 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:23:09.780412  280719 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:23:09.780525  280719 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:23:09.780581  280719 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:23:09.780653  280719 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:23:09.780722  280719 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:23:09.780786  280719 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:23:09.780884  280719 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:23:09.780956  280719 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:23:09.781101  280719 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:23:09.781211  280719 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:23:09.781247  280719 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:23:09.781313  280719 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:23:06.641884  284567 main.go:141] libmachine: (cert-options-606292) Calling .GetIP
	I0729 13:23:06.644650  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:06.645069  284567 main.go:141] libmachine: (cert-options-606292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:a6:d0", ip: ""} in network mk-cert-options-606292: {Iface:virbr4 ExpiryTime:2024-07-29 14:22:54 +0000 UTC Type:0 Mac:52:54:00:7a:a6:d0 Iaid: IPaddr:192.168.83.228 Prefix:24 Hostname:cert-options-606292 Clientid:01:52:54:00:7a:a6:d0}
	I0729 13:23:06.645093  284567 main.go:141] libmachine: (cert-options-606292) DBG | domain cert-options-606292 has defined IP address 192.168.83.228 and MAC address 52:54:00:7a:a6:d0 in network mk-cert-options-606292
	I0729 13:23:06.645349  284567 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0729 13:23:06.649540  284567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:23:06.662577  284567 kubeadm.go:883] updating cluster {Name:cert-options-606292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.3 ClusterName:cert-options-606292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.228 Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:23:06.662711  284567 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:23:06.662756  284567 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:23:06.696959  284567 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:23:06.697023  284567 ssh_runner.go:195] Run: which lz4
	I0729 13:23:06.701506  284567 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:23:06.706559  284567 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:23:06.706595  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:23:08.092398  284567 crio.go:462] duration metric: took 1.390925269s to copy over tarball
	I0729 13:23:08.092461  284567 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:23:09.782976  280719 out.go:204]   - Booting up control plane ...
	I0729 13:23:09.783067  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:23:09.783147  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:23:09.783218  280719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:23:09.783314  280719 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:23:09.783513  280719 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:23:09.783587  280719 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:23:09.783692  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.783926  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784002  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.784213  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784313  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.784523  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784624  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.784916  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.784999  280719 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:23:09.785254  280719 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:23:09.785266  280719 kubeadm.go:310] 
	I0729 13:23:09.785328  280719 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:23:09.785386  280719 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:23:09.785396  280719 kubeadm.go:310] 
	I0729 13:23:09.785465  280719 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:23:09.785513  280719 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:23:09.785647  280719 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:23:09.785659  280719 kubeadm.go:310] 
	I0729 13:23:09.785812  280719 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:23:09.785865  280719 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:23:09.785908  280719 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:23:09.785918  280719 kubeadm.go:310] 
	I0729 13:23:09.786037  280719 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:23:09.786145  280719 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:23:09.786155  280719 kubeadm.go:310] 
	I0729 13:23:09.786310  280719 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:23:09.786411  280719 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:23:09.786515  280719 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:23:09.786591  280719 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:23:09.786677  280719 kubeadm.go:310] 
	I0729 13:23:09.786679  280719 kubeadm.go:394] duration metric: took 3m55.244027085s to StartCluster
	I0729 13:23:09.786745  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:23:09.786818  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:23:09.842864  280719 cri.go:89] found id: ""
	I0729 13:23:09.842893  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.842904  280719 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:23:09.842911  280719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:23:09.842982  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:23:09.887119  280719 cri.go:89] found id: ""
	I0729 13:23:09.887157  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.887171  280719 logs.go:278] No container was found matching "etcd"
	I0729 13:23:09.887181  280719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:23:09.887253  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:23:09.936955  280719 cri.go:89] found id: ""
	I0729 13:23:09.936984  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.936995  280719 logs.go:278] No container was found matching "coredns"
	I0729 13:23:09.937002  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:23:09.937068  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:23:09.986449  280719 cri.go:89] found id: ""
	I0729 13:23:09.986484  280719 logs.go:276] 0 containers: []
	W0729 13:23:09.986496  280719 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:23:09.986504  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:23:09.986575  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:23:10.035098  280719 cri.go:89] found id: ""
	I0729 13:23:10.035131  280719 logs.go:276] 0 containers: []
	W0729 13:23:10.035143  280719 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:23:10.035151  280719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:23:10.035222  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:23:10.081348  280719 cri.go:89] found id: ""
	I0729 13:23:10.081381  280719 logs.go:276] 0 containers: []
	W0729 13:23:10.081394  280719 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:23:10.081402  280719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:23:10.081467  280719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:23:10.124531  280719 cri.go:89] found id: ""
	I0729 13:23:10.124575  280719 logs.go:276] 0 containers: []
	W0729 13:23:10.124587  280719 logs.go:278] No container was found matching "kindnet"
	I0729 13:23:10.124600  280719 logs.go:123] Gathering logs for dmesg ...
	I0729 13:23:10.124617  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:23:10.162352  280719 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:23:10.162399  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:23:10.329830  280719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:23:10.329865  280719 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:23:10.329888  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:23:10.441461  280719 logs.go:123] Gathering logs for container status ...
	I0729 13:23:10.441514  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:23:10.495788  280719 logs.go:123] Gathering logs for kubelet ...
	I0729 13:23:10.495823  280719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 13:23:10.557689  280719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:23:10.557746  280719 out.go:239] * 
	W0729 13:23:10.557803  280719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:23:10.557825  280719 out.go:239] * 
	W0729 13:23:10.558736  280719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:23:10.562723  280719 out.go:177] 
	W0729 13:23:10.564091  280719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:23:10.564168  280719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:23:10.564202  280719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:23:10.565735  280719 out.go:177] 
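The v1.20.0 start above exits with K8S_KUBELET_NOT_RUNNING, and the log itself prints both the diagnostic commands and a suggested retry flag. A sketch that strings them together, assuming <profile> stands for the failing profile (not named on these lines) and reusing only commands already quoted in the output:

    # inspect the kubelet on the node first (e.g. via `minikube ssh -p <profile>`)
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then retry with the cgroup-driver override the suggestion line proposes
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd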
	I0729 13:23:10.500095  284567 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.407596321s)
	I0729 13:23:10.500116  284567 crio.go:469] duration metric: took 2.407698268s to extract the tarball
	I0729 13:23:10.500123  284567 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:23:10.545772  284567 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:23:10.618576  284567 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:23:10.618590  284567 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:23:10.618598  284567 kubeadm.go:934] updating node { 192.168.83.228 8555 v1.30.3 crio true true} ...
	I0729 13:23:10.618731  284567 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-606292 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:cert-options-606292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:23:10.618817  284567 ssh_runner.go:195] Run: crio config
	I0729 13:23:10.695382  284567 cni.go:84] Creating CNI manager for ""
	I0729 13:23:10.695392  284567 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:23:10.695403  284567 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:23:10.695434  284567 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.228 APIServerPort:8555 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-606292 NodeName:cert-options-606292 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:23:10.695583  284567 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.228
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-606292"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:23:10.695636  284567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:23:10.708248  284567 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:23:10.708303  284567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:23:10.719690  284567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0729 13:23:10.738712  284567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:23:10.756937  284567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0729 13:23:10.776665  284567 ssh_runner.go:195] Run: grep 192.168.83.228	control-plane.minikube.internal$ /etc/hosts
	I0729 13:23:10.781154  284567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:23:10.794942  284567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:23:10.945159  284567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:23:10.975467  284567 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292 for IP: 192.168.83.228
	I0729 13:23:10.975481  284567 certs.go:194] generating shared ca certs ...
	I0729 13:23:10.975500  284567 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:10.975694  284567 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:23:10.975740  284567 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:23:10.975749  284567 certs.go:256] generating profile certs ...
	I0729 13:23:10.975818  284567 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/client.key
	I0729 13:23:10.975828  284567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/client.crt with IP's: []
	I0729 13:23:11.169521  284567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/client.crt ...
	I0729 13:23:11.169543  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/client.crt: {Name:mkd4c0236b96bcc20da2cf5022476d5014518f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:11.169744  284567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/client.key ...
	I0729 13:23:11.169757  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/client.key: {Name:mkf58934b7fa8bc5ecdd297a73a563f607e03b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:11.169868  284567 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.key.259d275b
	I0729 13:23:11.169885  284567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.crt.259d275b with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.228]
	I0729 13:23:11.378301  284567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.crt.259d275b ...
	I0729 13:23:11.378316  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.crt.259d275b: {Name:mk8048b36b1fef4eaa7c2aa86c8fbfbfe44832e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:11.378490  284567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.key.259d275b ...
	I0729 13:23:11.378500  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.key.259d275b: {Name:mk907281c5ffa8532128c0f271d71928d78088b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:11.378597  284567 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.crt.259d275b -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.crt
	I0729 13:23:11.378661  284567 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.key.259d275b -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.key
	I0729 13:23:11.378706  284567 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.key
	I0729 13:23:11.378716  284567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.crt with IP's: []
	I0729 13:23:11.447246  284567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.crt ...
	I0729 13:23:11.447265  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.crt: {Name:mkbfea2cad98bdf2e00013ba3a0e8860aee5ca0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:11.447437  284567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.key ...
	I0729 13:23:11.447446  284567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.key: {Name:mk86ff7914aec9ce2c9c0641a53d5e0006139f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:23:11.447686  284567 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:23:11.447734  284567 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:23:11.447743  284567 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:23:11.447772  284567 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:23:11.447797  284567 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:23:11.447819  284567 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:23:11.447853  284567 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:23:11.448370  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:23:11.486168  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:23:11.517812  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:23:11.545657  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:23:11.572010  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0729 13:23:11.597505  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:23:11.623781  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:23:11.653942  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/cert-options-606292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:23:11.686381  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:23:11.718581  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:23:11.747591  284567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:23:11.771890  284567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:23:11.788671  284567 ssh_runner.go:195] Run: openssl version
	I0729 13:23:11.794805  284567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:23:11.806063  284567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:23:11.810903  284567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:23:11.810962  284567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:23:11.817023  284567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:23:11.827876  284567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:23:11.838566  284567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:23:11.843172  284567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:23:11.843228  284567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:23:11.848773  284567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:23:11.859059  284567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:23:11.869534  284567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:23:11.874042  284567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:23:11.874099  284567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:23:11.879783  284567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
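The three symlink operations above follow the usual OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (the 3ec20f2e.0, b5213941.0 and 51391683.0 names in the log). A small sketch for verifying one of them, using only the openssl invocation already shown:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${hash}.0"   # should resolve back to minikubeCA.pem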
	I0729 13:23:11.893757  284567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:23:11.900327  284567 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:23:11.900374  284567 kubeadm.go:392] StartCluster: {Name:cert-options-606292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:cert-options-606292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.228 Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:23:11.900443  284567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:23:11.900480  284567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:23:11.952291  284567 cri.go:89] found id: ""
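The empty result ('found id: ""') means the crictl query above returned no container IDs, i.e. CRI-O is not yet tracking any kube-system containers, so minikube moves straight on to laying down the kubeadm configuration. The same query can be run by hand on the node to see what CRI-O has (a sketch; the json output form is just one readable variant):

    # IDs only, as minikube runs it
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # full metadata for the same set of containers
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json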
	I0729 13:23:11.952374  284567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:23:11.964662  284567 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:23:11.975864  284567 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:23:11.990029  284567 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:23:11.990040  284567 kubeadm.go:157] found existing configuration files:
	
	I0729 13:23:11.990083  284567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0729 13:23:12.002842  284567 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:23:12.002896  284567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:23:12.013254  284567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0729 13:23:12.022596  284567 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:23:12.022653  284567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:23:12.032327  284567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0729 13:23:12.041600  284567 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:23:12.041669  284567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:23:12.051108  284567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0729 13:23:12.059822  284567 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:23:12.059863  284567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
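The grep/rm cycle from admin.conf through scheduler.conf above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint (https://control-plane.minikube.internal:8555 here) and is removed otherwise. On this first start none of the files exist, so every grep exits with status 2 and the rm calls are no-ops. The cycle is equivalent to roughly the following sketch (endpoint and file names taken from the log; the loop form itself is illustrative):

    endpoint="https://control-plane.minikube.internal:8555"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already references the expected API endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done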
	I0729 13:23:12.069288  284567 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:23:12.188610  284567 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:23:12.188669  284567 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:23:12.315442  284567 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:23:12.315555  284567 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:23:12.315647  284567 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:23:12.522731  284567 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
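What follows is the CRI-O journal from the pause-220574 node, captured at debug level: each Request/Response pair shows a CRI client polling CRI-O over the CRI gRPC API (Version, ImageFsInfo, ListContainers), so the same ListContainersResponse payload repeats with only the request IDs changing. To collect an equivalent log from a node yourself (a sketch; debug entries normally require log_level = "debug" in /etc/crio/crio.conf or the corresponding --log-level flag):

    # dump the recent CRI-O journal from the node
    sudo journalctl -u crio --since "5 minutes ago" --no-pager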
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.648120473Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259393648096819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71cb9524-ec42-4e9f-9b87-c1bb87c34533 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.648926627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e54e6ee-9f09-4cc2-beb5-46b5a96e88d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.649033726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e54e6ee-9f09-4cc2-beb5-46b5a96e88d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.649336950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259353014544583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d43
87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259353070888842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259353042555496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722259353030752321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722259353005228395,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec,PodSandboxId:288281f9786257fdd11206cf711f9da9a17513993978669c2e4aaf9f76ffd2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259317455633280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e54e6ee-9f09-4cc2-beb5-46b5a96e88d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.690798661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd9099f0-2594-412c-be60-79792f3a4b32 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.690976517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd9099f0-2594-412c-be60-79792f3a4b32 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.692275572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02b55728-6ad9-46ce-82c0-a37c2114eeaa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.693434654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259393693357957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02b55728-6ad9-46ce-82c0-a37c2114eeaa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.694770238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0c07c14-825c-4315-88b2-db2ea310be2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.694879357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0c07c14-825c-4315-88b2-db2ea310be2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.695318691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259353014544583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d43
87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259353070888842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259353042555496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722259353030752321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722259353005228395,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec,PodSandboxId:288281f9786257fdd11206cf711f9da9a17513993978669c2e4aaf9f76ffd2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259317455633280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0c07c14-825c-4315-88b2-db2ea310be2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.738351114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0150362-c0b0-43db-b27d-0ccff2247eac name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.738517845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0150362-c0b0-43db-b27d-0ccff2247eac name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.739774051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44792cae-1ed5-468b-9513-7cb9f05ebd94 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.740283143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259393740105444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44792cae-1ed5-468b-9513-7cb9f05ebd94 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.740921057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83bdb530-d70f-43ff-8456-cf2421513d6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.740971796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83bdb530-d70f-43ff-8456-cf2421513d6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.741297267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259353014544583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d43
87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259353070888842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259353042555496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722259353030752321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722259353005228395,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec,PodSandboxId:288281f9786257fdd11206cf711f9da9a17513993978669c2e4aaf9f76ffd2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259317455633280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83bdb530-d70f-43ff-8456-cf2421513d6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.789030046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5b5539a-42a3-4819-bd64-cca071c65f8e name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.789101377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5b5539a-42a3-4819-bd64-cca071c65f8e name=/runtime.v1.RuntimeService/Version
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.791046991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=287ac349-1b5e-41c8-9c13-f3b5e908dce3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.791701718Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259393791674831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=287ac349-1b5e-41c8-9c13-f3b5e908dce3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.792606858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5b1cdbd-697b-4043-aea9-b817cbd5551b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.792676011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5b1cdbd-697b-4043-aea9-b817cbd5551b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:23:13 pause-220574 crio[2246]: time="2024-07-29 13:23:13.792955931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722259370130658881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d4387,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259366304263420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259366287668428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259366272536792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259363845132807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433,PodSandboxId:eede76f4fd7d3817cdcc54770a806dc612acbc1ff55937825aa304f4ac6f10bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259353814633727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb,PodSandboxId:aa81ddf6ea29e11c4cfa7056a8a8b813859259ad1d812b04fdaa35679deec0e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259353014544583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9x2zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d102922f-5f2c-4f39-9ef4-698b8a4200b2,},Annotations:map[string]string{io.kubernetes.container.hash: 168d43
87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6,PodSandboxId:2aba4bd3c413e68aae1946ec5adecfa89d19d2d1f68b940d2cbb3b73a535cc48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259353070888842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a25ca1ae6eb5dc4f9baa8714fd089404,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e,PodSandboxId:8de9f9c714e0eef7c96f19fbfeaa3b1613e5cff5c769b7d9490a5eab4fdfa190,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259353042555496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015b3e34b26c6c4d5abba1f6270310bc,},Annotations:map[string]string{io.kubernetes.container.hash: cd8872a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566,PodSandboxId:de86232cbb45eb4b5e0d9c4be35a9e11774b526db2fa5805d1a7c2dac490427b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722259353030752321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb59befc5cf5cf66209f443f45a9883,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c,PodSandboxId:4d438f6e4d9f9219b5788cd0c372549774a98666d78254e5705a19560a6279a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722259353005228395,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-220574,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dc17ea22abaca2d068b5bdb8a70355e,},Annotations:map[string]string{io.kubernetes.container.hash: e8061bf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec,PodSandboxId:288281f9786257fdd11206cf711f9da9a17513993978669c2e4aaf9f76ffd2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259317455633280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k5vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1389db61-0ea2-41a7-bc84-b8b0a234e2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a79cd15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5b1cdbd-697b-4043-aea9-b817cbd5551b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d75a30cce700c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   23 seconds ago       Running             kube-proxy                2                   aa81ddf6ea29e       kube-proxy-9x2zj
	237260430fe49       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago       Running             etcd                      2                   8de9f9c714e0e       etcd-pause-220574
	321760ac6f478       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   27 seconds ago       Running             kube-apiserver            2                   4d438f6e4d9f9       kube-apiserver-pause-220574
	b98bb49824351       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   27 seconds ago       Running             kube-controller-manager   2                   de86232cbb45e       kube-controller-manager-pause-220574
	cc0b245a66c1a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   30 seconds ago       Running             kube-scheduler            2                   2aba4bd3c413e       kube-scheduler-pause-220574
	781af45652d87       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago       Running             coredns                   1                   eede76f4fd7d3       coredns-7db6d8ff4d-8k5vv
	8a0ee555ac5bc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   40 seconds ago       Exited              kube-scheduler            1                   2aba4bd3c413e       kube-scheduler-pause-220574
	93f1e67829cd7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   40 seconds ago       Exited              etcd                      1                   8de9f9c714e0e       etcd-pause-220574
	f06a4f49a0c5b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   40 seconds ago       Exited              kube-controller-manager   1                   de86232cbb45e       kube-controller-manager-pause-220574
	65734ff39c259       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   40 seconds ago       Exited              kube-proxy                1                   aa81ddf6ea29e       kube-proxy-9x2zj
	8eb59faf1dd3a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   40 seconds ago       Exited              kube-apiserver            1                   4d438f6e4d9f9       kube-apiserver-pause-220574
	25d7b075af9a9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   288281f978625       coredns-7db6d8ff4d-8k5vv
	
	
	==> coredns [25d7b075af9a9d82689f954bf0183550147e55c5dc25912b8eb7d179f4c1e7ec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54273 - 53972 "HINFO IN 6537992820329738610.4150310348957589040. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008317116s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [781af45652d87093aa92abc5b24702fe7121eec1f9bf18d958bfed9311ad6433] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60331 - 41521 "HINFO IN 8959339138602030184.1223837170644759578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010213156s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1860483542]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:22:34.122) (total time: 10002ms):
	Trace[1860483542]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:22:44.124)
	Trace[1860483542]: [10.002247375s] [10.002247375s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[156343312]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:22:34.122) (total time: 10002ms):
	Trace[156343312]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (13:22:44.124)
	Trace[156343312]: [10.002286135s] [10.002286135s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1720021442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:22:34.122) (total time: 10002ms):
	Trace[1720021442]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (13:22:44.125)
	Trace[1720021442]: [10.002602046s] [10.002602046s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
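
The reflector errors above are the coredns kubernetes plugin listing Namespaces, Services and EndpointSlices against https://10.96.0.1:443 while the control plane restarts. As a minimal client-go sketch of the same kind of List call (illustrative only, assuming it runs in-cluster under a service account allowed to list namespaces):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves the same https://10.96.0.1:443 service
		// endpoint that the coredns plugin dials in the log above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Same shape of request as the failing reflector List:
		// GET /api/v1/namespaces?limit=500
		nsList, err := clientset.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
		if err != nil {
			// While the apiserver restarts this surfaces as the
			// "TLS handshake timeout" / "connection refused" errors seen above.
			fmt.Println("list namespaces:", err)
			return
		}
		fmt.Println("namespaces:", len(nsList.Items))
	}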
	
	
	==> describe nodes <==
	Name:               pause-220574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-220574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=pause-220574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_21_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:21:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-220574
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:23:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:22:49 +0000   Mon, 29 Jul 2024 13:21:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    pause-220574
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a07adda48df4f98a28a6c045026b254
	  System UUID:                3a07adda-48df-4f98-a28a-6c045026b254
	  Boot ID:                    dc371f82-ed94-451a-a8bf-7ae7b1a4e6d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8k5vv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 etcd-pause-220574                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         93s
	  kube-system                 kube-apiserver-pause-220574             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-220574    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-9x2zj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-220574             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node pause-220574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node pause-220574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x7 over 99s)  kubelet          Node pause-220574 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node pause-220574 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node pause-220574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     93s                kubelet          Node pause-220574 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeReady                92s                kubelet          Node pause-220574 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node pause-220574 event: Registered Node pause-220574 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-220574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-220574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node pause-220574 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-220574 event: Registered Node pause-220574 in Controller
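
For reference, the request percentages in the Allocated resources table above are plain ratios against the node's Allocatable figures: 750m CPU of 2 cores (2000m) is 37.5%, shown as 37%, and 170Mi (174080Ki) of 2015704Ki allocatable memory is about 8.6%, shown as 8%.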
	
	
	==> dmesg <==
	[  +0.059336] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059446] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.216147] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.108143] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.277610] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.365726] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +0.059774] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.759401] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.578969] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.480614] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.078740] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.304310] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.142726] systemd-fstab-generator[1545]: Ignoring "noauto" option for root device
	[Jul29 13:22] systemd-fstab-generator[2164]: Ignoring "noauto" option for root device
	[  +0.088396] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.075516] systemd-fstab-generator[2177]: Ignoring "noauto" option for root device
	[  +0.163255] systemd-fstab-generator[2191]: Ignoring "noauto" option for root device
	[  +0.152841] systemd-fstab-generator[2203]: Ignoring "noauto" option for root device
	[  +0.300507] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +5.373882] systemd-fstab-generator[2358]: Ignoring "noauto" option for root device
	[  +0.069896] kauditd_printk_skb: 100 callbacks suppressed
	[  +9.207389] kauditd_printk_skb: 89 callbacks suppressed
	[  +4.140942] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[  +4.617516] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 13:23] systemd-fstab-generator[3520]: Ignoring "noauto" option for root device
	
	
	==> etcd [237260430fe494f50be342e9b03c7934c8a09ed0740301a42d31184e77897fe4] <==
	{"level":"info","ts":"2024-07-29T13:22:46.600687Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:46.600696Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:46.600885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 switched to configuration voters=(16921330813298615523)"}
	{"level":"info","ts":"2024-07-29T13:22:46.600976Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","added-peer-id":"ead4a4b8bd8924e3","added-peer-peer-urls":["https://192.168.39.207:2380"]}
	{"level":"info","ts":"2024-07-29T13:22:46.601062Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:46.601103Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:46.607483Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:22:46.607809Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-29T13:22:46.607842Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-29T13:22:46.608021Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ead4a4b8bd8924e3","initial-advertise-peer-urls":["https://192.168.39.207:2380"],"listen-peer-urls":["https://192.168.39.207:2380"],"advertise-client-urls":["https://192.168.39.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:22:46.609106Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:22:48.074596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:48.074659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:48.074699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 received MsgPreVoteResp from ead4a4b8bd8924e3 at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:48.074723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.07473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 received MsgVoteResp from ead4a4b8bd8924e3 at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.074738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.074745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ead4a4b8bd8924e3 elected leader ead4a4b8bd8924e3 at term 3"}
	{"level":"info","ts":"2024-07-29T13:22:48.080854Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ead4a4b8bd8924e3","local-member-attributes":"{Name:pause-220574 ClientURLs:[https://192.168.39.207:2379]}","request-path":"/0/members/ead4a4b8bd8924e3/attributes","cluster-id":"7fc3162940ce7ea7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:22:48.080903Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:22:48.081249Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:22:48.081356Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:22:48.081438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:22:48.083334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.207:2379"}
	{"level":"info","ts":"2024-07-29T13:22:48.083517Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e] <==
	{"level":"info","ts":"2024-07-29T13:22:33.587689Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"28.844955ms"}
	{"level":"info","ts":"2024-07-29T13:22:33.614804Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T13:22:33.656154Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","commit-index":442}
	{"level":"info","ts":"2024-07-29T13:22:33.66566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T13:22:33.665874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T13:22:33.665904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ead4a4b8bd8924e3 [peers: [], term: 2, commit: 442, applied: 0, lastindex: 442, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T13:22:33.67209Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T13:22:33.674989Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":423}
	{"level":"info","ts":"2024-07-29T13:22:33.677191Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T13:22:33.684662Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"ead4a4b8bd8924e3","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:22:33.684933Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"ead4a4b8bd8924e3"}
	{"level":"info","ts":"2024-07-29T13:22:33.68498Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"ead4a4b8bd8924e3","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T13:22:33.685151Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:33.685229Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:33.685248Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T13:22:33.68546Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T13:22:33.685806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 switched to configuration voters=(16921330813298615523)"}
	{"level":"info","ts":"2024-07-29T13:22:33.685882Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","added-peer-id":"ead4a4b8bd8924e3","added-peer-peer-urls":["https://192.168.39.207:2380"]}
	{"level":"info","ts":"2024-07-29T13:22:33.685995Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:33.686041Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:22:33.74867Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:22:33.749193Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ead4a4b8bd8924e3","initial-advertise-peer-urls":["https://192.168.39.207:2380"],"listen-peer-urls":["https://192.168.39.207:2380"],"advertise-client-urls":["https://192.168.39.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:22:33.749301Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:22:33.749569Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-29T13:22:33.749655Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.207:2380"}
	
	
	==> kernel <==
	 13:23:14 up 2 min,  0 users,  load average: 1.47, 0.49, 0.18
	Linux pause-220574 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [321760ac6f4786a9c184d355acc68e10734bfb055d2965a0b04e423058bd7605] <==
	I0729 13:22:49.502305       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 13:22:49.502545       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:22:49.502568       1 policy_source.go:224] refreshing policies
	I0729 13:22:49.529134       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 13:22:49.530538       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:22:49.530612       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 13:22:49.531449       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:22:49.531675       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 13:22:49.531717       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 13:22:49.536599       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 13:22:49.539648       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 13:22:49.546170       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 13:22:49.546220       1 aggregator.go:165] initial CRD sync complete...
	I0729 13:22:49.546233       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 13:22:49.546238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 13:22:49.546243       1 cache.go:39] Caches are synced for autoregister controller
	I0729 13:22:49.568078       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:22:50.336117       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 13:22:51.084472       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 13:22:51.101205       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 13:22:51.138094       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 13:22:51.172100       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 13:22:51.179258       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 13:23:01.713914       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 13:23:01.965613       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c] <==
	I0729 13:22:33.608974       1 options.go:221] external host was not specified, using 192.168.39.207
	I0729 13:22:33.612808       1 server.go:148] Version: v1.30.3
	I0729 13:22:33.612872       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 13:22:34.298952       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:34.299669       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 13:22:34.299863       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 13:22:34.303247       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:22:34.306572       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 13:22:34.306601       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 13:22:34.306798       1 instance.go:299] Using reconciler: lease
	W0729 13:22:34.307543       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:35.299969       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:35.300121       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:35.307909       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:36.814870       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:36.875476       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:37.131266       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:38.993828       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:39.008978       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:39.526847       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:42.404300       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:43.315344       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 13:22:43.553352       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
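
The repeated gRPC dial failures above are the restarted kube-apiserver waiting for etcd to come back on 127.0.0.1:2379. A minimal sketch of probing that same endpoint with the etcd v3 client; the client cert/key paths are placeholders (the etcd log shows client-cert-auth = true), and only the CA path /var/lib/minikube/certs/etcd/ca.crt comes from the log above:

	package main

	import (
		"context"
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"os"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		// Placeholder client cert/key; must be signed by the etcd CA.
		cert, err := tls.LoadX509KeyPair("/path/to/etcd-client.crt", "/path/to/etcd-client.key")
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://127.0.0.1:2379"},
			DialTimeout: 5 * time.Second,
			TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Until etcd is back up this fails the same way the apiserver's
		// dials do ("connection refused"); afterwards it reports the member.
		status, err := cli.Status(ctx, "https://127.0.0.1:2379")
		if err != nil {
			fmt.Println("etcd status:", err)
			return
		}
		fmt.Println("etcd version:", status.Version)
	}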
	
	
	==> kube-controller-manager [b98bb4982435163f49f078195b3ae7ba1e10894e69d74509fbba7068018ed6bc] <==
	I0729 13:23:01.711966       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 13:23:01.714093       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 13:23:01.716406       1 shared_informer.go:320] Caches are synced for expand
	I0729 13:23:01.717671       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 13:23:01.719599       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 13:23:01.723267       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 13:23:01.725753       1 shared_informer.go:320] Caches are synced for service account
	I0729 13:23:01.729030       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 13:23:01.739632       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 13:23:01.740848       1 shared_informer.go:320] Caches are synced for namespace
	I0729 13:23:01.740936       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 13:23:01.743293       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 13:23:01.745549       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 13:23:01.749071       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 13:23:01.752440       1 shared_informer.go:320] Caches are synced for disruption
	I0729 13:23:01.754826       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 13:23:01.765766       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 13:23:01.769446       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 13:23:01.801318       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 13:23:01.914499       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 13:23:01.934498       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 13:23:01.947239       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 13:23:02.381486       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 13:23:02.381585       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 13:23:02.388768       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566] <==
	
	
	==> kube-proxy [65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb] <==
	I0729 13:22:34.132592       1 server_linux.go:69] "Using iptables proxy"
	
	
	==> kube-proxy [d75a30cce700c4f1699456d16608916836a618fcd0c0306fefa1c8633081f9ef] <==
	I0729 13:22:50.265347       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:22:50.275308       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.207"]
	I0729 13:22:50.311241       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:22:50.311290       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:22:50.311311       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:22:50.314447       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:22:50.314853       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:22:50.314893       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:22:50.316542       1 config.go:192] "Starting service config controller"
	I0729 13:22:50.316595       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:22:50.316645       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:22:50.316672       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:22:50.317443       1 config.go:319] "Starting node config controller"
	I0729 13:22:50.317478       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:22:50.417520       1 shared_informer.go:320] Caches are synced for node config
	I0729 13:22:50.417610       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:22:50.417671       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8a0ee555ac5bcd3bd8c730ae656a39eebaccc75ffc78a8db5e6e1bbaf3a9b1f6] <==
	
	
	==> kube-scheduler [cc0b245a66c1a0716d5e1d33a0e8a9b079979321cf0aec1b77d72d12c4b52fb2] <==
	W0729 13:22:49.407332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 13:22:49.407359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 13:22:49.415553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:22:49.415600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:22:49.415669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:22:49.415703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:22:49.415766       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:22:49.415793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 13:22:49.415850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:22:49.415877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:22:49.415936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:22:49.415968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:22:49.416029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:22:49.416058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 13:22:49.416109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:22:49.416136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:22:49.416185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:22:49.416212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:22:49.416260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:22:49.416287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:22:49.416348       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:22:49.416439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:22:49.452511       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:22:49.452560       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 13:22:53.076291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028890    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cb59befc5cf5cf66209f443f45a9883-kubeconfig\") pod \"kube-controller-manager-pause-220574\" (UID: \"2cb59befc5cf5cf66209f443f45a9883\") " pod="kube-system/kube-controller-manager-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028906    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dc17ea22abaca2d068b5bdb8a70355e-ca-certs\") pod \"kube-apiserver-pause-220574\" (UID: \"9dc17ea22abaca2d068b5bdb8a70355e\") " pod="kube-system/kube-apiserver-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028922    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dc17ea22abaca2d068b5bdb8a70355e-k8s-certs\") pod \"kube-apiserver-pause-220574\" (UID: \"9dc17ea22abaca2d068b5bdb8a70355e\") " pod="kube-system/kube-apiserver-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.028948    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dc17ea22abaca2d068b5bdb8a70355e-usr-share-ca-certificates\") pod \"kube-apiserver-pause-220574\" (UID: \"9dc17ea22abaca2d068b5bdb8a70355e\") " pod="kube-system/kube-apiserver-pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.029227    3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-220574?timeout=10s\": dial tcp 192.168.39.207:8443: connect: connection refused" interval="400ms"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.112112    3187 kubelet_node_status.go:73] "Attempting to register node" node="pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.112983    3187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.207:8443: connect: connection refused" node="pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.258805    3187 scope.go:117] "RemoveContainer" containerID="93f1e67829cd792d61691aab1aa6dff52503ba4272ef36cd99216a8b95b3a42e"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.259177    3187 scope.go:117] "RemoveContainer" containerID="8eb59faf1dd3a3e31cab7db3567690d471c83a65fbfaa27b7ff0b0528c677b9c"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.260270    3187 scope.go:117] "RemoveContainer" containerID="f06a4f49a0c5ba051058141fb6a6237909cf86ee2d87553ecba8e13864043566"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.430359    3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-220574?timeout=10s\": dial tcp 192.168.39.207:8443: connect: connection refused" interval="800ms"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: I0729 13:22:46.514490    3187 kubelet_node_status.go:73] "Attempting to register node" node="pause-220574"
	Jul 29 13:22:46 pause-220574 kubelet[3187]: E0729 13:22:46.515328    3187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.207:8443: connect: connection refused" node="pause-220574"
	Jul 29 13:22:47 pause-220574 kubelet[3187]: I0729 13:22:47.316676    3187 kubelet_node_status.go:73] "Attempting to register node" node="pause-220574"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.588916    3187 kubelet_node_status.go:112] "Node was previously registered" node="pause-220574"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.589482    3187 kubelet_node_status.go:76] "Successfully registered node" node="pause-220574"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.591109    3187 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.592074    3187 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.811648    3187 apiserver.go:52] "Watching apiserver"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.815531    3187 topology_manager.go:215] "Topology Admit Handler" podUID="d102922f-5f2c-4f39-9ef4-698b8a4200b2" podNamespace="kube-system" podName="kube-proxy-9x2zj"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.815886    3187 topology_manager.go:215] "Topology Admit Handler" podUID="1389db61-0ea2-41a7-bc84-b8b0a234e2d6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8k5vv"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.821548    3187 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.868345    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d102922f-5f2c-4f39-9ef4-698b8a4200b2-xtables-lock\") pod \"kube-proxy-9x2zj\" (UID: \"d102922f-5f2c-4f39-9ef4-698b8a4200b2\") " pod="kube-system/kube-proxy-9x2zj"
	Jul 29 13:22:49 pause-220574 kubelet[3187]: I0729 13:22:49.868548    3187 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d102922f-5f2c-4f39-9ef4-698b8a4200b2-lib-modules\") pod \"kube-proxy-9x2zj\" (UID: \"d102922f-5f2c-4f39-9ef4-698b8a4200b2\") " pod="kube-system/kube-proxy-9x2zj"
	Jul 29 13:22:50 pause-220574 kubelet[3187]: I0729 13:22:50.117206    3187 scope.go:117] "RemoveContainer" containerID="65734ff39c259587b9ab49fba91854f001d23a586913895b03f67df55c5abfdb"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-220574 -n pause-220574
helpers_test.go:261: (dbg) Run:  kubectl --context pause-220574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (74.98s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (284.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-924039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-924039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m44.126751196s)

                                                
                                                
-- stdout --
	* [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:27:48.310910  294229 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:27:48.311102  294229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:48.311116  294229 out.go:304] Setting ErrFile to fd 2...
	I0729 13:27:48.311122  294229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:48.311343  294229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:27:48.311922  294229 out.go:298] Setting JSON to false
	I0729 13:27:48.313114  294229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11411,"bootTime":1722248257,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:27:48.313200  294229 start.go:139] virtualization: kvm guest
	I0729 13:27:48.315994  294229 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:27:48.317728  294229 notify.go:220] Checking for updates...
	I0729 13:27:48.320174  294229 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:27:48.322104  294229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:27:48.329055  294229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:27:48.330919  294229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:27:48.332278  294229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:27:48.333693  294229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:27:48.335622  294229 config.go:182] Loaded profile config "bridge-507612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:27:48.335774  294229 config.go:182] Loaded profile config "enable-default-cni-507612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:27:48.335918  294229 config.go:182] Loaded profile config "flannel-507612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:27:48.336063  294229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:27:48.390913  294229 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:27:48.392443  294229 start.go:297] selected driver: kvm2
	I0729 13:27:48.392466  294229 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:27:48.392482  294229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:27:48.393538  294229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:27:48.393647  294229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:27:48.416243  294229 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:27:48.416316  294229 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:27:48.416648  294229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:27:48.416689  294229 cni.go:84] Creating CNI manager for ""
	I0729 13:27:48.416699  294229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:27:48.416722  294229 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:27:48.416818  294229 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:27:48.416979  294229 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:27:48.419031  294229 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:27:48.420341  294229 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:27:48.420453  294229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:27:48.420500  294229 cache.go:56] Caching tarball of preloaded images
	I0729 13:27:48.420658  294229 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:27:48.420676  294229 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:27:48.420826  294229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:27:48.420858  294229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json: {Name:mkec8b071dd3304f8421e5609f264e4ef472177b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:27:48.421052  294229 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:27:58.809393  294229 start.go:364] duration metric: took 10.388286467s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:27:58.809462  294229 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:27:58.809622  294229 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:27:58.811700  294229 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:27:58.811892  294229 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:27:58.811942  294229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:27:58.829741  294229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36953
	I0729 13:27:58.830211  294229 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:27:58.830730  294229 main.go:141] libmachine: Using API Version  1
	I0729 13:27:58.830750  294229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:27:58.831055  294229 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:27:58.831230  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:27:58.831365  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:27:58.831523  294229 start.go:159] libmachine.API.Create for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:27:58.831551  294229 client.go:168] LocalClient.Create starting
	I0729 13:27:58.831586  294229 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem
	I0729 13:27:58.831633  294229 main.go:141] libmachine: Decoding PEM data...
	I0729 13:27:58.831656  294229 main.go:141] libmachine: Parsing certificate...
	I0729 13:27:58.831733  294229 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem
	I0729 13:27:58.831760  294229 main.go:141] libmachine: Decoding PEM data...
	I0729 13:27:58.831777  294229 main.go:141] libmachine: Parsing certificate...
	I0729 13:27:58.831810  294229 main.go:141] libmachine: Running pre-create checks...
	I0729 13:27:58.831823  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .PreCreateCheck
	I0729 13:27:58.832214  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:27:58.832562  294229 main.go:141] libmachine: Creating machine...
	I0729 13:27:58.832575  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .Create
	I0729 13:27:58.832712  294229 main.go:141] libmachine: (old-k8s-version-924039) Creating KVM machine...
	I0729 13:27:58.833881  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found existing default KVM network
	I0729 13:27:58.835349  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:27:58.835172  295381 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f710}
	I0729 13:27:58.835371  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | created network xml: 
	I0729 13:27:58.835384  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | <network>
	I0729 13:27:58.835395  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |   <name>mk-old-k8s-version-924039</name>
	I0729 13:27:58.835408  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |   <dns enable='no'/>
	I0729 13:27:58.835415  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |   
	I0729 13:27:58.835427  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 13:27:58.835436  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |     <dhcp>
	I0729 13:27:58.835461  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 13:27:58.835473  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |     </dhcp>
	I0729 13:27:58.835483  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |   </ip>
	I0729 13:27:58.835490  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG |   
	I0729 13:27:58.835502  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | </network>
	I0729 13:27:58.835508  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | 
	I0729 13:27:58.841483  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | trying to create private KVM network mk-old-k8s-version-924039 192.168.39.0/24...
	I0729 13:27:58.912110  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | private KVM network mk-old-k8s-version-924039 192.168.39.0/24 created
	I0729 13:27:58.912139  294229 main.go:141] libmachine: (old-k8s-version-924039) Setting up store path in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039 ...
	I0729 13:27:58.912154  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:27:58.912083  295381 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:27:58.912172  294229 main.go:141] libmachine: (old-k8s-version-924039) Building disk image from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:27:58.912238  294229 main.go:141] libmachine: (old-k8s-version-924039) Downloading /home/jenkins/minikube-integration/19341-233093/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:27:59.197685  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:27:59.197544  295381 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa...
	I0729 13:27:59.392761  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:27:59.392629  295381 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/old-k8s-version-924039.rawdisk...
	I0729 13:27:59.392817  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Writing magic tar header
	I0729 13:27:59.392856  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Writing SSH key tar header
	I0729 13:27:59.392878  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:27:59.392741  295381 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039 ...
	I0729 13:27:59.392898  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039
	I0729 13:27:59.392916  294229 main.go:141] libmachine: (old-k8s-version-924039) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039 (perms=drwx------)
	I0729 13:27:59.392930  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines
	I0729 13:27:59.392948  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:27:59.392961  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093
	I0729 13:27:59.392973  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:27:59.392993  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:27:59.393005  294229 main.go:141] libmachine: (old-k8s-version-924039) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:27:59.393018  294229 main.go:141] libmachine: (old-k8s-version-924039) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube (perms=drwxr-xr-x)
	I0729 13:27:59.393024  294229 main.go:141] libmachine: (old-k8s-version-924039) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093 (perms=drwxrwxr-x)
	I0729 13:27:59.393034  294229 main.go:141] libmachine: (old-k8s-version-924039) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:27:59.393041  294229 main.go:141] libmachine: (old-k8s-version-924039) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:27:59.393049  294229 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:27:59.393131  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Checking permissions on dir: /home
	I0729 13:27:59.393161  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Skipping /home - not owner
	I0729 13:27:59.394122  294229 main.go:141] libmachine: (old-k8s-version-924039) define libvirt domain using xml: 
	I0729 13:27:59.394143  294229 main.go:141] libmachine: (old-k8s-version-924039) <domain type='kvm'>
	I0729 13:27:59.394151  294229 main.go:141] libmachine: (old-k8s-version-924039)   <name>old-k8s-version-924039</name>
	I0729 13:27:59.394159  294229 main.go:141] libmachine: (old-k8s-version-924039)   <memory unit='MiB'>2200</memory>
	I0729 13:27:59.394164  294229 main.go:141] libmachine: (old-k8s-version-924039)   <vcpu>2</vcpu>
	I0729 13:27:59.394168  294229 main.go:141] libmachine: (old-k8s-version-924039)   <features>
	I0729 13:27:59.394173  294229 main.go:141] libmachine: (old-k8s-version-924039)     <acpi/>
	I0729 13:27:59.394178  294229 main.go:141] libmachine: (old-k8s-version-924039)     <apic/>
	I0729 13:27:59.394183  294229 main.go:141] libmachine: (old-k8s-version-924039)     <pae/>
	I0729 13:27:59.394189  294229 main.go:141] libmachine: (old-k8s-version-924039)     
	I0729 13:27:59.394198  294229 main.go:141] libmachine: (old-k8s-version-924039)   </features>
	I0729 13:27:59.394222  294229 main.go:141] libmachine: (old-k8s-version-924039)   <cpu mode='host-passthrough'>
	I0729 13:27:59.394236  294229 main.go:141] libmachine: (old-k8s-version-924039)   
	I0729 13:27:59.394241  294229 main.go:141] libmachine: (old-k8s-version-924039)   </cpu>
	I0729 13:27:59.394246  294229 main.go:141] libmachine: (old-k8s-version-924039)   <os>
	I0729 13:27:59.394251  294229 main.go:141] libmachine: (old-k8s-version-924039)     <type>hvm</type>
	I0729 13:27:59.394257  294229 main.go:141] libmachine: (old-k8s-version-924039)     <boot dev='cdrom'/>
	I0729 13:27:59.394262  294229 main.go:141] libmachine: (old-k8s-version-924039)     <boot dev='hd'/>
	I0729 13:27:59.394270  294229 main.go:141] libmachine: (old-k8s-version-924039)     <bootmenu enable='no'/>
	I0729 13:27:59.394275  294229 main.go:141] libmachine: (old-k8s-version-924039)   </os>
	I0729 13:27:59.394281  294229 main.go:141] libmachine: (old-k8s-version-924039)   <devices>
	I0729 13:27:59.394287  294229 main.go:141] libmachine: (old-k8s-version-924039)     <disk type='file' device='cdrom'>
	I0729 13:27:59.394295  294229 main.go:141] libmachine: (old-k8s-version-924039)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/boot2docker.iso'/>
	I0729 13:27:59.394303  294229 main.go:141] libmachine: (old-k8s-version-924039)       <target dev='hdc' bus='scsi'/>
	I0729 13:27:59.394307  294229 main.go:141] libmachine: (old-k8s-version-924039)       <readonly/>
	I0729 13:27:59.394339  294229 main.go:141] libmachine: (old-k8s-version-924039)     </disk>
	I0729 13:27:59.394362  294229 main.go:141] libmachine: (old-k8s-version-924039)     <disk type='file' device='disk'>
	I0729 13:27:59.394407  294229 main.go:141] libmachine: (old-k8s-version-924039)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:27:59.394437  294229 main.go:141] libmachine: (old-k8s-version-924039)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/old-k8s-version-924039.rawdisk'/>
	I0729 13:27:59.394452  294229 main.go:141] libmachine: (old-k8s-version-924039)       <target dev='hda' bus='virtio'/>
	I0729 13:27:59.394465  294229 main.go:141] libmachine: (old-k8s-version-924039)     </disk>
	I0729 13:27:59.394478  294229 main.go:141] libmachine: (old-k8s-version-924039)     <interface type='network'>
	I0729 13:27:59.394489  294229 main.go:141] libmachine: (old-k8s-version-924039)       <source network='mk-old-k8s-version-924039'/>
	I0729 13:27:59.394499  294229 main.go:141] libmachine: (old-k8s-version-924039)       <model type='virtio'/>
	I0729 13:27:59.394509  294229 main.go:141] libmachine: (old-k8s-version-924039)     </interface>
	I0729 13:27:59.394523  294229 main.go:141] libmachine: (old-k8s-version-924039)     <interface type='network'>
	I0729 13:27:59.394539  294229 main.go:141] libmachine: (old-k8s-version-924039)       <source network='default'/>
	I0729 13:27:59.394551  294229 main.go:141] libmachine: (old-k8s-version-924039)       <model type='virtio'/>
	I0729 13:27:59.394558  294229 main.go:141] libmachine: (old-k8s-version-924039)     </interface>
	I0729 13:27:59.394571  294229 main.go:141] libmachine: (old-k8s-version-924039)     <serial type='pty'>
	I0729 13:27:59.394581  294229 main.go:141] libmachine: (old-k8s-version-924039)       <target port='0'/>
	I0729 13:27:59.394593  294229 main.go:141] libmachine: (old-k8s-version-924039)     </serial>
	I0729 13:27:59.394603  294229 main.go:141] libmachine: (old-k8s-version-924039)     <console type='pty'>
	I0729 13:27:59.394615  294229 main.go:141] libmachine: (old-k8s-version-924039)       <target type='serial' port='0'/>
	I0729 13:27:59.394628  294229 main.go:141] libmachine: (old-k8s-version-924039)     </console>
	I0729 13:27:59.394641  294229 main.go:141] libmachine: (old-k8s-version-924039)     <rng model='virtio'>
	I0729 13:27:59.394652  294229 main.go:141] libmachine: (old-k8s-version-924039)       <backend model='random'>/dev/random</backend>
	I0729 13:27:59.394660  294229 main.go:141] libmachine: (old-k8s-version-924039)     </rng>
	I0729 13:27:59.394669  294229 main.go:141] libmachine: (old-k8s-version-924039)     
	I0729 13:27:59.394678  294229 main.go:141] libmachine: (old-k8s-version-924039)     
	I0729 13:27:59.394688  294229 main.go:141] libmachine: (old-k8s-version-924039)   </devices>
	I0729 13:27:59.394703  294229 main.go:141] libmachine: (old-k8s-version-924039) </domain>
	I0729 13:27:59.394720  294229 main.go:141] libmachine: (old-k8s-version-924039) 
	I0729 13:27:59.398771  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:31:78:01 in network default
	I0729 13:27:59.399365  294229 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:27:59.399404  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:27:59.400092  294229 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:27:59.400431  294229 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:27:59.401087  294229 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:27:59.401932  294229 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:28:00.757543  294229 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:28:00.758590  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:00.759115  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:00.759145  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:00.759091  295381 retry.go:31] will retry after 208.340643ms: waiting for machine to come up
	I0729 13:28:00.969541  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:00.970162  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:00.970193  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:00.970122  295381 retry.go:31] will retry after 372.774032ms: waiting for machine to come up
	I0729 13:28:01.344973  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:01.345538  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:01.345567  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:01.345494  295381 retry.go:31] will retry after 374.09359ms: waiting for machine to come up
	I0729 13:28:01.720917  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:01.721574  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:01.721601  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:01.721528  295381 retry.go:31] will retry after 424.460149ms: waiting for machine to come up
	I0729 13:28:02.147234  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:02.147779  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:02.147837  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:02.147776  295381 retry.go:31] will retry after 636.861783ms: waiting for machine to come up
	I0729 13:28:02.786648  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:02.787289  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:02.787311  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:02.787232  295381 retry.go:31] will retry after 770.545394ms: waiting for machine to come up
	I0729 13:28:03.558887  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:03.559410  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:03.559442  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:03.559356  295381 retry.go:31] will retry after 885.358805ms: waiting for machine to come up
	I0729 13:28:04.446804  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:04.447232  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:04.447259  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:04.447199  295381 retry.go:31] will retry after 1.291606284s: waiting for machine to come up
	I0729 13:28:05.740954  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:05.741547  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:05.741588  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:05.741488  295381 retry.go:31] will retry after 1.549427553s: waiting for machine to come up
	I0729 13:28:07.293213  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:07.293753  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:07.293783  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:07.293697  295381 retry.go:31] will retry after 1.830965968s: waiting for machine to come up
	I0729 13:28:09.126903  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:09.127354  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:09.127379  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:09.127305  295381 retry.go:31] will retry after 2.412656188s: waiting for machine to come up
	I0729 13:28:11.542089  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:11.542669  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:11.542689  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:11.542612  295381 retry.go:31] will retry after 2.864833521s: waiting for machine to come up
	I0729 13:28:14.409651  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:14.410140  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:14.410173  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:14.410077  295381 retry.go:31] will retry after 4.345448721s: waiting for machine to come up
	I0729 13:28:18.758216  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:18.758726  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:28:18.758781  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:28:18.758651  295381 retry.go:31] will retry after 5.311972247s: waiting for machine to come up
	I0729 13:28:24.072547  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.073101  294229 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:28:24.073124  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.073130  294229 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:28:24.073411  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039
	I0729 13:28:24.152465  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:28:24.152493  294229 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:28:24.152506  294229 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:28:24.155648  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.156100  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.156133  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.156322  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:28:24.156346  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:28:24.156382  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:28:24.156396  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:28:24.156446  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:28:24.280926  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:28:24.281192  294229 main.go:141] libmachine: (old-k8s-version-924039) KVM machine creation complete!
	I0729 13:28:24.281552  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:28:24.282114  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:28:24.282314  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:28:24.282531  294229 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:28:24.282546  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:28:24.283846  294229 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:28:24.283863  294229 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:28:24.283870  294229 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:28:24.283876  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:24.286502  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.286913  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.286948  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.287125  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:24.287319  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.287498  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.287646  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:24.287844  294229 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:24.288034  294229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:28:24.288050  294229 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:28:24.388153  294229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:28:24.388177  294229 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:28:24.388185  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:24.391227  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.391647  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.391669  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.391857  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:24.392094  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.392292  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.392473  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:24.392643  294229 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:24.392871  294229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:28:24.392884  294229 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:28:24.501710  294229 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:28:24.501821  294229 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:28:24.501842  294229 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:28:24.501863  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:28:24.502162  294229 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:28:24.502190  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:28:24.502408  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:24.505209  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.505603  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.505641  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.505774  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:24.505957  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.506136  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.506257  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:24.506429  294229 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:24.506639  294229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:28:24.506658  294229 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:28:24.625041  294229 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:28:24.625086  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:24.628160  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.628485  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.628546  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.628756  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:24.628976  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.629165  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.629306  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:24.629493  294229 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:24.629709  294229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:28:24.629742  294229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:28:24.743759  294229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:28:24.743826  294229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:28:24.743867  294229 buildroot.go:174] setting up certificates
	I0729 13:28:24.743881  294229 provision.go:84] configureAuth start
	I0729 13:28:24.743901  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:28:24.744226  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:28:24.747164  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.747579  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.747609  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.747775  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:24.750604  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.751062  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.751105  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.751265  294229 provision.go:143] copyHostCerts
	I0729 13:28:24.751328  294229 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:28:24.751344  294229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:28:24.751410  294229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:28:24.751564  294229 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:28:24.751579  294229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:28:24.751610  294229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:28:24.751704  294229 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:28:24.751714  294229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:28:24.751741  294229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:28:24.751827  294229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:28:24.844316  294229 provision.go:177] copyRemoteCerts
	I0729 13:28:24.844386  294229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:28:24.844410  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:24.847248  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.847587  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:24.847627  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:24.847784  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:24.848007  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:24.848176  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:24.848308  294229 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:28:24.931979  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:28:24.957278  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:28:24.981120  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:28:25.005177  294229 provision.go:87] duration metric: took 261.276612ms to configureAuth
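	The configureAuth block above generates a per-host server certificate with the SAN list printed at its start (127.0.0.1, 192.168.39.227, localhost, minikube, old-k8s-version-924039) and then copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A minimal sketch for confirming the SANs on the generated cert, assuming the same .minikube layout used in this run:
	  openssl x509 -in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'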
	I0729 13:28:25.005207  294229 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:28:25.005367  294229 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:28:25.005446  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:25.008350  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.008715  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.008753  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.009026  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:25.009259  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:25.009455  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:25.009625  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:25.009801  294229 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:25.010019  294229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:28:25.010040  294229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:28:25.282287  294229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:28:25.282317  294229 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:28:25.282329  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetURL
	I0729 13:28:25.283654  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using libvirt version 6000000
	I0729 13:28:25.286216  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.286650  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.286691  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.286882  294229 main.go:141] libmachine: Docker is up and running!
	I0729 13:28:25.286902  294229 main.go:141] libmachine: Reticulating splines...
	I0729 13:28:25.286910  294229 client.go:171] duration metric: took 26.455348408s to LocalClient.Create
	I0729 13:28:25.286934  294229 start.go:167] duration metric: took 26.455412023s to libmachine.API.Create "old-k8s-version-924039"
	I0729 13:28:25.286948  294229 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:28:25.286966  294229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:28:25.286986  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:28:25.287250  294229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:28:25.287332  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:25.290003  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.290358  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.290390  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.290557  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:25.290763  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:25.290968  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:25.291129  294229 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:28:25.375239  294229 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:28:25.379379  294229 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:28:25.379404  294229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:28:25.379467  294229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:28:25.379536  294229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:28:25.379619  294229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:28:25.388646  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:28:25.414459  294229 start.go:296] duration metric: took 127.490748ms for postStartSetup
	I0729 13:28:25.414507  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:28:25.415248  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:28:25.418291  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.418730  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.418751  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.419041  294229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:28:25.419403  294229 start.go:128] duration metric: took 26.609765239s to createHost
	I0729 13:28:25.419434  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:25.421974  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.422307  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.422336  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.422473  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:25.422680  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:25.422891  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:25.423093  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:25.423274  294229 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:25.423447  294229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:28:25.423466  294229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:28:25.533682  294229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259705.485181797
	
	I0729 13:28:25.533710  294229 fix.go:216] guest clock: 1722259705.485181797
	I0729 13:28:25.533717  294229 fix.go:229] Guest: 2024-07-29 13:28:25.485181797 +0000 UTC Remote: 2024-07-29 13:28:25.419419231 +0000 UTC m=+37.185893936 (delta=65.762566ms)
	I0729 13:28:25.533756  294229 fix.go:200] guest clock delta is within tolerance: 65.762566ms
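	fix.go reads the guest clock over SSH (date +%s.%N), compares it with the host-side timestamp, and accepts the result when the delta stays inside tolerance, as with the 65.762566ms shown here. A rough manual version of the same check, assuming SSH access with the key and user from this run:
	  host=$(date +%s.%N)
	  guest=$(ssh -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa docker@192.168.39.227 'date +%s.%N')
	  echo "clock delta: $(echo "$guest - $host" | bc)s"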
	I0729 13:28:25.533764  294229 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 26.724338685s
	I0729 13:28:25.533807  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:28:25.534087  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:28:25.537091  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.537480  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.537498  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.537645  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:28:25.538227  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:28:25.538408  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:28:25.538519  294229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:28:25.538561  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:25.538658  294229 ssh_runner.go:195] Run: cat /version.json
	I0729 13:28:25.538679  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:28:25.542325  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.542598  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.542743  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.542775  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.542960  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:25.543106  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:25.543130  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:25.543142  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:25.543320  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:25.543354  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:28:25.543538  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:28:25.543551  294229 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:28:25.543698  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:28:25.543873  294229 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:28:25.661192  294229 ssh_runner.go:195] Run: systemctl --version
	I0729 13:28:25.668975  294229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:28:25.838935  294229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:28:25.845115  294229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:28:25.845213  294229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:28:25.862481  294229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:28:25.862511  294229 start.go:495] detecting cgroup driver to use...
	I0729 13:28:25.862591  294229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:28:25.879372  294229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:28:25.894580  294229 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:28:25.894730  294229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:28:25.910097  294229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:28:25.925505  294229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:28:26.061754  294229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:28:26.236879  294229 docker.go:233] disabling docker service ...
	I0729 13:28:26.236960  294229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:28:26.258214  294229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:28:26.276183  294229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:28:26.421002  294229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:28:26.559913  294229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:28:26.575379  294229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:28:26.596858  294229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:28:26.596920  294229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:26.608253  294229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:28:26.608327  294229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:26.619458  294229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:26.630484  294229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
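	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.2 image expected by Kubernetes v1.20, the cgroupfs cgroup manager, and a pod-scoped conmon cgroup. One way to inspect the resulting drop-in on the guest:
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf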
	I0729 13:28:26.640515  294229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:28:26.650788  294229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:28:26.660100  294229 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:28:26.660159  294229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:28:26.678544  294229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
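	The sysctl probe above fails only because br_netfilter is not loaded yet; loading the module exposes /proc/sys/net/bridge/* so bridged pod traffic is visible to iptables, and ip_forward is enabled for pod-to-pod routing. The equivalent manual steps, if this ever needs to be reproduced by hand:
	  sudo modprobe br_netfilter
	  sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward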
	I0729 13:28:26.688898  294229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:28:26.836821  294229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:28:26.985855  294229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:28:26.985926  294229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:28:26.991278  294229 start.go:563] Will wait 60s for crictl version
	I0729 13:28:26.991337  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:26.995107  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:28:27.038823  294229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:28:27.038936  294229 ssh_runner.go:195] Run: crio --version
	I0729 13:28:27.069845  294229 ssh_runner.go:195] Run: crio --version
	I0729 13:28:27.101921  294229 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:28:27.103200  294229 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:28:27.106944  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:27.107371  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:14 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:28:27.107402  294229 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:28:27.107659  294229 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:28:27.112360  294229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:28:27.125937  294229 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:28:27.126111  294229 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:28:27.126183  294229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:28:27.165400  294229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:28:27.165464  294229 ssh_runner.go:195] Run: which lz4
	I0729 13:28:27.170126  294229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:28:27.174780  294229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:28:27.174817  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:28:28.980065  294229 crio.go:462] duration metric: took 1.809981394s to copy over tarball
	I0729 13:28:28.980144  294229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:28:32.012221  294229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032039935s)
	I0729 13:28:32.012251  294229 crio.go:469] duration metric: took 3.032158679s to extract the tarball
	I0729 13:28:32.012259  294229 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:28:32.082096  294229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:28:32.137238  294229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:28:32.137267  294229 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:28:32.137322  294229 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:28:32.137620  294229 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:28:32.137975  294229 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:28:32.138003  294229 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:28:32.138022  294229 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:28:32.138046  294229 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:28:32.138081  294229 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:28:32.138215  294229 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:28:32.139693  294229 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:28:32.139690  294229 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:28:32.139899  294229 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:28:32.139972  294229 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:28:32.140053  294229 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:28:32.140088  294229 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:28:32.140527  294229 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:28:32.140585  294229 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:28:32.311894  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:28:32.311944  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:28:32.315233  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:28:32.329323  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:28:32.331561  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:28:32.351658  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:28:32.376175  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:28:32.490718  294229 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:28:32.490777  294229 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:28:32.490827  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:32.491100  294229 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:28:32.491136  294229 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:28:32.491170  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:32.525071  294229 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:28:32.525122  294229 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:28:32.525166  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:32.543509  294229 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:28:32.543559  294229 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:28:32.543613  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:32.550132  294229 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:28:32.550189  294229 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:28:32.550272  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:32.555417  294229 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:28:32.555466  294229 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:28:32.555515  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:32.555620  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:28:32.555692  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:28:32.555758  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:28:32.555771  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:28:32.555770  294229 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:28:32.555841  294229 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:28:32.555862  294229 ssh_runner.go:195] Run: which crictl
	I0729 13:28:32.557968  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:28:32.581092  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:28:32.719456  294229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:28:32.728631  294229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:28:32.728698  294229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:28:32.728718  294229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:28:32.728821  294229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:28:32.728877  294229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:28:32.728930  294229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:28:32.773613  294229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:28:34.046332  294229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:28:34.213236  294229 cache_images.go:92] duration metric: took 2.075947548s to LoadCachedImages
	W0729 13:28:34.213349  294229 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0729 13:28:34.213365  294229 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:28:34.213469  294229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:28:34.213533  294229 ssh_runner.go:195] Run: crio config
	I0729 13:28:34.277108  294229 cni.go:84] Creating CNI manager for ""
	I0729 13:28:34.277139  294229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:28:34.277153  294229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:28:34.277178  294229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:28:34.277365  294229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:28:34.277441  294229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:28:34.292190  294229 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:28:34.292279  294229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:28:34.307713  294229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:28:34.331459  294229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:28:34.353727  294229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:28:34.375107  294229 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:28:34.379592  294229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:28:34.397638  294229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:28:34.542986  294229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:28:34.562437  294229 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:28:34.562467  294229 certs.go:194] generating shared ca certs ...
	I0729 13:28:34.562489  294229 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:34.562658  294229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:28:34.562709  294229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:28:34.562721  294229 certs.go:256] generating profile certs ...
	I0729 13:28:34.562809  294229 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:28:34.562824  294229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.crt with IP's: []
	I0729 13:28:34.823019  294229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.crt ...
	I0729 13:28:34.823051  294229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.crt: {Name:mk3d75b8eba5fc31f9ef9187cd6daa66185190d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:34.823233  294229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key ...
	I0729 13:28:34.823245  294229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key: {Name:mke4b83fca8df60d9c368713016634f90536a303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:34.823323  294229 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:28:34.823335  294229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt.4e51fa9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0729 13:28:35.129849  294229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt.4e51fa9b ...
	I0729 13:28:35.129889  294229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt.4e51fa9b: {Name:mkba86ddf03805d3321d615263feb3457f1a8408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:35.130089  294229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b ...
	I0729 13:28:35.130120  294229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b: {Name:mk962aa451be20f8d99d9113e95bbc1a8afa8fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:35.130244  294229 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt.4e51fa9b -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt
	I0729 13:28:35.130364  294229 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key
	I0729 13:28:35.130455  294229 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:28:35.130487  294229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt with IP's: []
	I0729 13:28:35.397978  294229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt ...
	I0729 13:28:35.398019  294229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt: {Name:mk394815e88611b66e0d8c69491ca4ae194674c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:35.398226  294229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key ...
	I0729 13:28:35.398245  294229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key: {Name:mk743f2782fd3d435fe381604d4b8c38410606d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:35.398461  294229 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:28:35.398509  294229 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:28:35.398523  294229 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:28:35.398550  294229 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:28:35.398581  294229 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:28:35.398610  294229 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:28:35.398658  294229 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:28:35.399544  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:28:35.434251  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:28:35.462135  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:28:35.496778  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:28:35.533377  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:28:35.583146  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:28:35.629805  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:28:35.664295  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:28:35.714609  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:28:35.762109  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:28:35.798613  294229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:28:35.832713  294229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:28:35.852515  294229 ssh_runner.go:195] Run: openssl version
	I0729 13:28:35.858766  294229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:28:35.876184  294229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:28:35.882470  294229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:28:35.882546  294229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:28:35.890118  294229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:28:35.903594  294229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:28:35.917594  294229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:35.922608  294229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:35.922667  294229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:35.928568  294229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:28:35.941145  294229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:28:35.952723  294229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:28:35.957745  294229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:28:35.957811  294229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:28:35.963868  294229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
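	The ln -fs calls above follow OpenSSL's subject-hash convention: each CA copied to /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs, with the hash taken from the openssl x509 -hash call logged just before each link (3ec20f2e, b5213941, 51391683). Recreating one link by hand, assuming the same file layout:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"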
	I0729 13:28:35.977183  294229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:28:35.982777  294229 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:28:35.982853  294229 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:28:35.982953  294229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:28:35.983010  294229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:28:36.029230  294229 cri.go:89] found id: ""
	I0729 13:28:36.029289  294229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:28:36.042355  294229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:28:36.052966  294229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:28:36.069468  294229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:28:36.069493  294229 kubeadm.go:157] found existing configuration files:
	
	I0729 13:28:36.069543  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:28:36.083756  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:28:36.083815  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:28:36.098572  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:28:36.115481  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:28:36.115533  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:28:36.130591  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:28:36.142120  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:28:36.142192  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:28:36.152437  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:28:36.162582  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:28:36.162650  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:28:36.173347  294229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:28:36.336273  294229 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:28:36.336357  294229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:28:36.528502  294229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:28:36.528654  294229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:28:36.528817  294229 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:28:36.763343  294229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:28:36.765736  294229 out.go:204]   - Generating certificates and keys ...
	I0729 13:28:36.765840  294229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:28:36.765941  294229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:28:37.013190  294229 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 13:28:37.141361  294229 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 13:28:37.208555  294229 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 13:28:37.315857  294229 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 13:28:37.549819  294229 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 13:28:37.550089  294229 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-924039] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0729 13:28:37.658857  294229 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 13:28:37.659037  294229 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-924039] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0729 13:28:37.939524  294229 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 13:28:38.373828  294229 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 13:28:38.456123  294229 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 13:28:38.456375  294229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:28:38.577037  294229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:28:38.903788  294229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:28:39.017861  294229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:28:39.452889  294229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:28:39.488569  294229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:28:39.489173  294229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:28:39.489243  294229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:28:39.679220  294229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:28:39.680872  294229 out.go:204]   - Booting up control plane ...
	I0729 13:28:39.680991  294229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:28:39.691046  294229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:28:39.693022  294229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:28:39.694755  294229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:28:39.702858  294229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:29:19.663182  294229 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:29:19.664049  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:29:19.664280  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:29:24.663563  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:29:24.663867  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:29:34.663304  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:29:34.663524  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:29:54.663550  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:29:54.663789  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:30:34.662450  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:30:34.662684  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:30:34.662694  294229 kubeadm.go:310] 
	I0729 13:30:34.662729  294229 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:30:34.662820  294229 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:30:34.662844  294229 kubeadm.go:310] 
	I0729 13:30:34.662892  294229 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:30:34.662933  294229 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:30:34.663121  294229 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:30:34.663138  294229 kubeadm.go:310] 
	I0729 13:30:34.663258  294229 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:30:34.663308  294229 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:30:34.663349  294229 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:30:34.663356  294229 kubeadm.go:310] 
	I0729 13:30:34.663442  294229 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:30:34.663511  294229 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:30:34.663517  294229 kubeadm.go:310] 
	I0729 13:30:34.663596  294229 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:30:34.663685  294229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:30:34.663748  294229 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:30:34.663803  294229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:30:34.663811  294229 kubeadm.go:310] 
	I0729 13:30:34.664630  294229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:30:34.664742  294229 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:30:34.664870  294229 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:30:34.665016  294229 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-924039] and IPs [192.168.39.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-924039] and IPs [192.168.39.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-924039] and IPs [192.168.39.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-924039] and IPs [192.168.39.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:30:34.665076  294229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:30:35.128055  294229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:30:35.142434  294229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:30:35.152427  294229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:30:35.152452  294229 kubeadm.go:157] found existing configuration files:
	
	I0729 13:30:35.152510  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:30:35.161836  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:30:35.161885  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:30:35.172059  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:30:35.182025  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:30:35.182127  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:30:35.192101  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:30:35.201969  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:30:35.202038  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:30:35.212305  294229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:30:35.222309  294229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:30:35.222356  294229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:30:35.232550  294229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:30:35.305273  294229 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:30:35.305413  294229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:30:35.449243  294229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:30:35.449394  294229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:30:35.449560  294229 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:30:35.634288  294229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:30:35.636407  294229 out.go:204]   - Generating certificates and keys ...
	I0729 13:30:35.636496  294229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:30:35.636576  294229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:30:35.636701  294229 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:30:35.636785  294229 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:30:35.636913  294229 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:30:35.636964  294229 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:30:35.637053  294229 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:30:35.637486  294229 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:30:35.638308  294229 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:30:35.638961  294229 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:30:35.639293  294229 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:30:35.639348  294229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:30:36.012519  294229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:30:36.080894  294229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:30:36.363590  294229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:30:36.506138  294229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:30:36.524246  294229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:30:36.525303  294229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:30:36.525352  294229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:30:36.697308  294229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:30:36.699365  294229 out.go:204]   - Booting up control plane ...
	I0729 13:30:36.699473  294229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:30:36.712353  294229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:30:36.716076  294229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:30:36.717069  294229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:30:36.720434  294229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:31:16.722699  294229 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:31:16.723291  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:31:16.723650  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:31:21.724218  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:31:21.724413  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:31:31.725272  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:31:31.725483  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:31:51.727171  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:31:51.727381  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:32:31.726079  294229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:32:31.726288  294229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:32:31.726300  294229 kubeadm.go:310] 
	I0729 13:32:31.726349  294229 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:32:31.726433  294229 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:32:31.726455  294229 kubeadm.go:310] 
	I0729 13:32:31.726510  294229 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:32:31.726559  294229 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:32:31.726713  294229 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:32:31.726729  294229 kubeadm.go:310] 
	I0729 13:32:31.726881  294229 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:32:31.726931  294229 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:32:31.726981  294229 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:32:31.726997  294229 kubeadm.go:310] 
	I0729 13:32:31.727140  294229 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:32:31.727256  294229 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:32:31.727266  294229 kubeadm.go:310] 
	I0729 13:32:31.727404  294229 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:32:31.727515  294229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:32:31.727613  294229 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:32:31.727718  294229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:32:31.727727  294229 kubeadm.go:310] 
	I0729 13:32:31.728592  294229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:32:31.728696  294229 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:32:31.728804  294229 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:32:31.728886  294229 kubeadm.go:394] duration metric: took 3m55.746038528s to StartCluster
	I0729 13:32:31.728939  294229 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:32:31.729008  294229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:32:31.774450  294229 cri.go:89] found id: ""
	I0729 13:32:31.774489  294229 logs.go:276] 0 containers: []
	W0729 13:32:31.774501  294229 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:32:31.774509  294229 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:32:31.774579  294229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:32:31.816653  294229 cri.go:89] found id: ""
	I0729 13:32:31.816683  294229 logs.go:276] 0 containers: []
	W0729 13:32:31.816692  294229 logs.go:278] No container was found matching "etcd"
	I0729 13:32:31.816700  294229 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:32:31.816766  294229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:32:31.852190  294229 cri.go:89] found id: ""
	I0729 13:32:31.852221  294229 logs.go:276] 0 containers: []
	W0729 13:32:31.852229  294229 logs.go:278] No container was found matching "coredns"
	I0729 13:32:31.852235  294229 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:32:31.852291  294229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:32:31.885973  294229 cri.go:89] found id: ""
	I0729 13:32:31.886005  294229 logs.go:276] 0 containers: []
	W0729 13:32:31.886015  294229 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:32:31.886024  294229 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:32:31.886085  294229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:32:31.919926  294229 cri.go:89] found id: ""
	I0729 13:32:31.919954  294229 logs.go:276] 0 containers: []
	W0729 13:32:31.919964  294229 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:32:31.919971  294229 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:32:31.920037  294229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:32:31.954979  294229 cri.go:89] found id: ""
	I0729 13:32:31.955018  294229 logs.go:276] 0 containers: []
	W0729 13:32:31.955030  294229 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:32:31.955038  294229 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:32:31.955105  294229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:32:31.988324  294229 cri.go:89] found id: ""
	I0729 13:32:31.988358  294229 logs.go:276] 0 containers: []
	W0729 13:32:31.988368  294229 logs.go:278] No container was found matching "kindnet"
	I0729 13:32:31.988382  294229 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:32:31.988411  294229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:32:32.099653  294229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:32:32.099681  294229 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:32:32.099693  294229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:32:32.197291  294229 logs.go:123] Gathering logs for container status ...
	I0729 13:32:32.197332  294229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:32:32.266009  294229 logs.go:123] Gathering logs for kubelet ...
	I0729 13:32:32.266043  294229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:32:32.323659  294229 logs.go:123] Gathering logs for dmesg ...
	I0729 13:32:32.323693  294229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 13:32:32.337473  294229 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:32:32.337523  294229 out.go:239] * 
	* 
	W0729 13:32:32.337586  294229 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:32:32.337609  294229 out.go:239] * 
	* 
	W0729 13:32:32.338492  294229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:32:32.341438  294229 out.go:177] 
	W0729 13:32:32.342829  294229 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:32:32.342874  294229 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:32:32.342911  294229 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:32:32.344530  294229 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-924039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 6 (217.274567ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:32.600423  300499 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-924039" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (284.39s)
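The kubeadm wait-control-plane timeout above prints its own checklist; the lines below are a minimal sketch of those checks in shell form, assuming shell access to the affected VM. The profile name, driver, container runtime and Kubernetes version are taken from the failing `minikube start` invocation above; `CONTAINERID` is a placeholder for whatever crictl reports for the crashed component.

	# inside the guest (e.g. via `minikube ssh -p old-k8s-version-924039`):
	systemctl status kubelet          # is the kubelet running at all?
	journalctl -xeu kubelet           # recent kubelet log, as suggested above

	# list control-plane containers under cri-o, then inspect the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# suggestion printed by minikube above: retry with an explicit cgroup driver
	minikube start -p old-k8s-version-924039 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd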

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-135920 --alsologtostderr -v=3
E0729 13:30:06.025364  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:06.045708  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:06.086148  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:06.166744  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:06.327073  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:06.647714  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-135920 --alsologtostderr -v=3: exit status 82 (2m0.610924467s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-135920"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:30:06.058609  299603 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:30:06.058714  299603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:30:06.058722  299603 out.go:304] Setting ErrFile to fd 2...
	I0729 13:30:06.058726  299603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:30:06.058891  299603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:30:06.059122  299603 out.go:298] Setting JSON to false
	I0729 13:30:06.059196  299603 mustload.go:65] Loading cluster: embed-certs-135920
	I0729 13:30:06.059516  299603 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:06.059583  299603 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/config.json ...
	I0729 13:30:06.059755  299603 mustload.go:65] Loading cluster: embed-certs-135920
	I0729 13:30:06.059855  299603 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:06.059885  299603 stop.go:39] StopHost: embed-certs-135920
	I0729 13:30:06.060266  299603 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:30:06.060310  299603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:06.076770  299603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36613
	I0729 13:30:06.077252  299603 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:06.077933  299603 main.go:141] libmachine: Using API Version  1
	I0729 13:30:06.077967  299603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:06.078392  299603 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:06.081010  299603 out.go:177] * Stopping node "embed-certs-135920"  ...
	I0729 13:30:06.082483  299603 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 13:30:06.082520  299603 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:30:06.082788  299603 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 13:30:06.082834  299603 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:30:06.086102  299603 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:30:06.086599  299603 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:29:08 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:30:06.086639  299603 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:30:06.086792  299603 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:30:06.086955  299603 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:30:06.087233  299603 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:30:06.087396  299603 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:30:06.287770  299603 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 13:30:06.369323  299603 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 13:30:06.430793  299603 main.go:141] libmachine: Stopping "embed-certs-135920"...
	I0729 13:30:06.430824  299603 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:30:06.432591  299603 main.go:141] libmachine: (embed-certs-135920) Calling .Stop
	I0729 13:30:06.436331  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 0/120
	I0729 13:30:07.437879  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 1/120
	I0729 13:30:08.439305  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 2/120
	I0729 13:30:09.440784  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 3/120
	I0729 13:30:10.442478  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 4/120
	I0729 13:30:11.443950  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 5/120
	I0729 13:30:12.445517  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 6/120
	I0729 13:30:13.446876  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 7/120
	I0729 13:30:14.448251  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 8/120
	I0729 13:30:15.450287  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 9/120
	I0729 13:30:16.452688  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 10/120
	I0729 13:30:17.454208  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 11/120
	I0729 13:30:18.455450  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 12/120
	I0729 13:30:19.456682  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 13/120
	I0729 13:30:20.458126  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 14/120
	I0729 13:30:21.460035  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 15/120
	I0729 13:30:22.461461  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 16/120
	I0729 13:30:23.462782  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 17/120
	I0729 13:30:24.464822  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 18/120
	I0729 13:30:25.466422  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 19/120
	I0729 13:30:26.468411  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 20/120
	I0729 13:30:27.470068  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 21/120
	I0729 13:30:28.471696  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 22/120
	I0729 13:30:29.473681  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 23/120
	I0729 13:30:30.475363  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 24/120
	I0729 13:30:31.477369  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 25/120
	I0729 13:30:32.478625  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 26/120
	I0729 13:30:33.480126  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 27/120
	I0729 13:30:34.481419  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 28/120
	I0729 13:30:35.483582  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 29/120
	I0729 13:30:36.485371  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 30/120
	I0729 13:30:37.486588  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 31/120
	I0729 13:30:38.487840  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 32/120
	I0729 13:30:39.489517  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 33/120
	I0729 13:30:40.491366  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 34/120
	I0729 13:30:41.493121  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 35/120
	I0729 13:30:42.495160  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 36/120
	I0729 13:30:43.496207  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 37/120
	I0729 13:30:44.497573  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 38/120
	I0729 13:30:45.498789  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 39/120
	I0729 13:30:46.500985  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 40/120
	I0729 13:30:47.502364  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 41/120
	I0729 13:30:48.503573  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 42/120
	I0729 13:30:49.504832  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 43/120
	I0729 13:30:50.505946  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 44/120
	I0729 13:30:51.508070  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 45/120
	I0729 13:30:52.509339  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 46/120
	I0729 13:30:53.510506  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 47/120
	I0729 13:30:54.511705  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 48/120
	I0729 13:30:55.512680  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 49/120
	I0729 13:30:56.514073  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 50/120
	I0729 13:30:57.515167  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 51/120
	I0729 13:30:58.516619  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 52/120
	I0729 13:30:59.517918  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 53/120
	I0729 13:31:00.519393  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 54/120
	I0729 13:31:01.521325  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 55/120
	I0729 13:31:02.522497  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 56/120
	I0729 13:31:03.523630  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 57/120
	I0729 13:31:04.524745  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 58/120
	I0729 13:31:05.526023  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 59/120
	I0729 13:31:06.528183  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 60/120
	I0729 13:31:07.529460  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 61/120
	I0729 13:31:08.531127  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 62/120
	I0729 13:31:09.532387  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 63/120
	I0729 13:31:10.533801  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 64/120
	I0729 13:31:11.535783  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 65/120
	I0729 13:31:12.537019  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 66/120
	I0729 13:31:13.539301  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 67/120
	I0729 13:31:14.540432  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 68/120
	I0729 13:31:15.541794  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 69/120
	I0729 13:31:16.544105  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 70/120
	I0729 13:31:17.545446  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 71/120
	I0729 13:31:18.546520  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 72/120
	I0729 13:31:19.547794  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 73/120
	I0729 13:31:20.548871  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 74/120
	I0729 13:31:21.550977  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 75/120
	I0729 13:31:22.552300  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 76/120
	I0729 13:31:23.553559  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 77/120
	I0729 13:31:24.554776  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 78/120
	I0729 13:31:25.556138  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 79/120
	I0729 13:31:26.558013  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 80/120
	I0729 13:31:27.559105  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 81/120
	I0729 13:31:28.560146  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 82/120
	I0729 13:31:29.561212  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 83/120
	I0729 13:31:30.562392  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 84/120
	I0729 13:31:31.564386  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 85/120
	I0729 13:31:32.565543  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 86/120
	I0729 13:31:33.567027  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 87/120
	I0729 13:31:34.567998  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 88/120
	I0729 13:31:35.569312  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 89/120
	I0729 13:31:36.571152  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 90/120
	I0729 13:31:37.572504  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 91/120
	I0729 13:31:38.573643  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 92/120
	I0729 13:31:39.575143  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 93/120
	I0729 13:31:40.576303  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 94/120
	I0729 13:31:41.577913  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 95/120
	I0729 13:31:42.579056  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 96/120
	I0729 13:31:43.580343  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 97/120
	I0729 13:31:44.581440  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 98/120
	I0729 13:31:45.583142  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 99/120
	I0729 13:31:46.585112  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 100/120
	I0729 13:31:47.586931  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 101/120
	I0729 13:31:48.588140  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 102/120
	I0729 13:31:49.589184  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 103/120
	I0729 13:31:50.591233  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 104/120
	I0729 13:31:51.593089  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 105/120
	I0729 13:31:52.595070  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 106/120
	I0729 13:31:53.596058  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 107/120
	I0729 13:31:54.597394  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 108/120
	I0729 13:31:55.599189  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 109/120
	I0729 13:31:56.600445  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 110/120
	I0729 13:31:57.601571  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 111/120
	I0729 13:31:58.602600  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 112/120
	I0729 13:31:59.603669  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 113/120
	I0729 13:32:00.604765  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 114/120
	I0729 13:32:01.606678  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 115/120
	I0729 13:32:02.607854  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 116/120
	I0729 13:32:03.608829  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 117/120
	I0729 13:32:04.610093  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 118/120
	I0729 13:32:05.611268  299603 main.go:141] libmachine: (embed-certs-135920) Waiting for machine to stop 119/120
	I0729 13:32:06.612575  299603 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 13:32:06.612661  299603 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 13:32:06.614888  299603 out.go:177] 
	W0729 13:32:06.616550  299603 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 13:32:06.616578  299603 out.go:239] * 
	* 
	W0729 13:32:06.619611  299603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:32:06.621298  299603 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-135920 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920
E0729 13:32:06.891440  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920: exit status 3 (18.590537206s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:25.213153  300252 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.207:22: connect: no route to host
	E0729 13:32:25.213173  300252 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.207:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-135920" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.20s)
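For the GUEST_STOP_TIMEOUT failures in this group, minikube polled the VM state 120 times (roughly two minutes) and then gave up with exit status 82. The commands below are a minimal sketch of the follow-up the output itself asks for, assuming they are run on the test host against the same profile:

	# collect the log bundle the warning box above asks for
	minikube logs --file=logs.txt -p embed-certs-135920

	# re-check host state after the timeout (the post-mortem above does the same)
	minikube status --format={{.Host}} -p embed-certs-135920

	# the per-stop log referenced in the box is written on the test host at:
	#   /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log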

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-566777 --alsologtostderr -v=3
E0729 13:30:07.288839  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:08.569718  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:11.130654  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:16.251705  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:26.492727  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-566777 --alsologtostderr -v=3: exit status 82 (2m0.481976362s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-566777"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:30:07.056059  299642 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:30:07.056173  299642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:30:07.056181  299642 out.go:304] Setting ErrFile to fd 2...
	I0729 13:30:07.056185  299642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:30:07.056427  299642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:30:07.056661  299642 out.go:298] Setting JSON to false
	I0729 13:30:07.056738  299642 mustload.go:65] Loading cluster: no-preload-566777
	I0729 13:30:07.057140  299642 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:30:07.057241  299642 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/config.json ...
	I0729 13:30:07.057416  299642 mustload.go:65] Loading cluster: no-preload-566777
	I0729 13:30:07.057520  299642 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:30:07.057551  299642 stop.go:39] StopHost: no-preload-566777
	I0729 13:30:07.058637  299642 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:30:07.058706  299642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:07.074038  299642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0729 13:30:07.074502  299642 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:07.075131  299642 main.go:141] libmachine: Using API Version  1
	I0729 13:30:07.075159  299642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:07.075565  299642 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:07.077906  299642 out.go:177] * Stopping node "no-preload-566777"  ...
	I0729 13:30:07.079211  299642 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 13:30:07.079255  299642 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:30:07.079490  299642 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 13:30:07.079513  299642 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:30:07.082392  299642 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:30:07.082828  299642 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:28:42 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:30:07.082857  299642 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:30:07.083005  299642 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:30:07.083188  299642 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:30:07.083345  299642 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:30:07.083477  299642 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:30:07.180595  299642 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 13:30:07.235429  299642 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 13:30:07.291859  299642 main.go:141] libmachine: Stopping "no-preload-566777"...
	I0729 13:30:07.291890  299642 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:30:07.293454  299642 main.go:141] libmachine: (no-preload-566777) Calling .Stop
	I0729 13:30:07.296786  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 0/120
	I0729 13:30:08.298181  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 1/120
	I0729 13:30:09.299518  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 2/120
	I0729 13:30:10.300687  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 3/120
	I0729 13:30:11.301971  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 4/120
	I0729 13:30:12.304359  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 5/120
	I0729 13:30:13.306048  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 6/120
	I0729 13:30:14.307932  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 7/120
	I0729 13:30:15.309269  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 8/120
	I0729 13:30:16.310787  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 9/120
	I0729 13:30:17.313127  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 10/120
	I0729 13:30:18.315414  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 11/120
	I0729 13:30:19.316645  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 12/120
	I0729 13:30:20.317956  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 13/120
	I0729 13:30:21.319303  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 14/120
	I0729 13:30:22.320837  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 15/120
	I0729 13:30:23.322104  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 16/120
	I0729 13:30:24.323324  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 17/120
	I0729 13:30:25.324575  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 18/120
	I0729 13:30:26.326022  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 19/120
	I0729 13:30:27.328308  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 20/120
	I0729 13:30:28.329597  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 21/120
	I0729 13:30:29.331022  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 22/120
	I0729 13:30:30.332366  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 23/120
	I0729 13:30:31.333843  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 24/120
	I0729 13:30:32.335342  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 25/120
	I0729 13:30:33.337064  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 26/120
	I0729 13:30:34.338460  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 27/120
	I0729 13:30:35.340515  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 28/120
	I0729 13:30:36.341906  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 29/120
	I0729 13:30:37.343947  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 30/120
	I0729 13:30:38.345187  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 31/120
	I0729 13:30:39.347248  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 32/120
	I0729 13:30:40.348653  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 33/120
	I0729 13:30:41.349984  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 34/120
	I0729 13:30:42.352111  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 35/120
	I0729 13:30:43.353485  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 36/120
	I0729 13:30:44.354890  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 37/120
	I0729 13:30:45.356264  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 38/120
	I0729 13:30:46.357673  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 39/120
	I0729 13:30:47.359122  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 40/120
	I0729 13:30:48.360408  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 41/120
	I0729 13:30:49.361769  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 42/120
	I0729 13:30:50.363154  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 43/120
	I0729 13:30:51.364470  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 44/120
	I0729 13:30:52.366445  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 45/120
	I0729 13:30:53.367975  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 46/120
	I0729 13:30:54.369375  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 47/120
	I0729 13:30:55.370729  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 48/120
	I0729 13:30:56.372196  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 49/120
	I0729 13:30:57.374717  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 50/120
	I0729 13:30:58.376352  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 51/120
	I0729 13:30:59.377808  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 52/120
	I0729 13:31:00.379302  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 53/120
	I0729 13:31:01.380839  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 54/120
	I0729 13:31:02.383097  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 55/120
	I0729 13:31:03.384688  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 56/120
	I0729 13:31:04.386043  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 57/120
	I0729 13:31:05.387511  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 58/120
	I0729 13:31:06.388923  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 59/120
	I0729 13:31:07.390823  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 60/120
	I0729 13:31:08.392262  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 61/120
	I0729 13:31:09.393603  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 62/120
	I0729 13:31:10.395025  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 63/120
	I0729 13:31:11.396342  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 64/120
	I0729 13:31:12.398333  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 65/120
	I0729 13:31:13.399644  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 66/120
	I0729 13:31:14.400901  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 67/120
	I0729 13:31:15.402341  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 68/120
	I0729 13:31:16.403601  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 69/120
	I0729 13:31:17.405632  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 70/120
	I0729 13:31:18.407115  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 71/120
	I0729 13:31:19.408527  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 72/120
	I0729 13:31:20.409996  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 73/120
	I0729 13:31:21.411513  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 74/120
	I0729 13:31:22.413474  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 75/120
	I0729 13:31:23.414823  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 76/120
	I0729 13:31:24.416120  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 77/120
	I0729 13:31:25.417366  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 78/120
	I0729 13:31:26.418733  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 79/120
	I0729 13:31:27.420958  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 80/120
	I0729 13:31:28.422278  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 81/120
	I0729 13:31:29.423528  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 82/120
	I0729 13:31:30.424907  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 83/120
	I0729 13:31:31.426323  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 84/120
	I0729 13:31:32.428398  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 85/120
	I0729 13:31:33.429805  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 86/120
	I0729 13:31:34.431312  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 87/120
	I0729 13:31:35.432815  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 88/120
	I0729 13:31:36.434186  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 89/120
	I0729 13:31:37.436455  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 90/120
	I0729 13:31:38.437810  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 91/120
	I0729 13:31:39.439288  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 92/120
	I0729 13:31:40.441644  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 93/120
	I0729 13:31:41.443437  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 94/120
	I0729 13:31:42.445604  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 95/120
	I0729 13:31:43.447348  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 96/120
	I0729 13:31:44.448897  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 97/120
	I0729 13:31:45.450284  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 98/120
	I0729 13:31:46.451636  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 99/120
	I0729 13:31:47.454074  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 100/120
	I0729 13:31:48.455462  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 101/120
	I0729 13:31:49.456955  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 102/120
	I0729 13:31:50.458217  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 103/120
	I0729 13:31:51.459641  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 104/120
	I0729 13:31:52.461302  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 105/120
	I0729 13:31:53.462869  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 106/120
	I0729 13:31:54.464219  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 107/120
	I0729 13:31:55.465740  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 108/120
	I0729 13:31:56.467046  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 109/120
	I0729 13:31:57.469522  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 110/120
	I0729 13:31:58.470928  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 111/120
	I0729 13:31:59.472389  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 112/120
	I0729 13:32:00.473785  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 113/120
	I0729 13:32:01.475304  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 114/120
	I0729 13:32:02.477444  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 115/120
	I0729 13:32:03.478901  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 116/120
	I0729 13:32:04.480300  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 117/120
	I0729 13:32:05.481582  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 118/120
	I0729 13:32:06.483012  299642 main.go:141] libmachine: (no-preload-566777) Waiting for machine to stop 119/120
	I0729 13:32:07.484041  299642 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 13:32:07.484101  299642 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 13:32:07.485872  299642 out.go:177] 
	W0729 13:32:07.487572  299642 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 13:32:07.487595  299642 out.go:239] * 
	* 
	W0729 13:32:07.490457  299642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:32:07.491879  299642 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-566777 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777
E0729 13:32:17.132619  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:32:17.403190  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:17.408486  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:17.418754  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:17.438993  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:17.479431  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:17.559801  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:17.720259  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:18.040924  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:18.313471  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 13:32:18.681575  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:19.962361  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:32:22.522895  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777: exit status 3 (18.487711567s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:25.981096  300282 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	E0729 13:32:25.981115  300282 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-566777" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-972693 --alsologtostderr -v=3
E0729 13:30:40.192014  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:42.752787  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:46.973880  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:47.873178  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:50.929715  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 13:30:58.113511  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:31:18.593917  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:31:27.934027  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:31:56.649867  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:56.655178  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:56.665487  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:56.685797  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:56.726139  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:56.806590  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:56.967015  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:57.287785  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:57.928497  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:59.209710  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:31:59.554784  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:32:01.770385  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-972693 --alsologtostderr -v=3: exit status 82 (2m0.538969012s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-972693"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:30:40.171228  299920 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:30:40.171471  299920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:30:40.171479  299920 out.go:304] Setting ErrFile to fd 2...
	I0729 13:30:40.171484  299920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:30:40.171721  299920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:30:40.172306  299920 out.go:298] Setting JSON to false
	I0729 13:30:40.172416  299920 mustload.go:65] Loading cluster: default-k8s-diff-port-972693
	I0729 13:30:40.173332  299920 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:40.173444  299920 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/config.json ...
	I0729 13:30:40.173647  299920 mustload.go:65] Loading cluster: default-k8s-diff-port-972693
	I0729 13:30:40.173783  299920 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:40.173825  299920 stop.go:39] StopHost: default-k8s-diff-port-972693
	I0729 13:30:40.174273  299920 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:30:40.174314  299920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:40.189384  299920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0729 13:30:40.189856  299920 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:40.190443  299920 main.go:141] libmachine: Using API Version  1
	I0729 13:30:40.190465  299920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:40.190808  299920 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:40.193338  299920 out.go:177] * Stopping node "default-k8s-diff-port-972693"  ...
	I0729 13:30:40.194810  299920 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 13:30:40.194846  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:30:40.195070  299920 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 13:30:40.195095  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:30:40.197921  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:30:40.198382  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:30:40.198410  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:30:40.198560  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:30:40.198733  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:30:40.198898  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:30:40.199042  299920 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:30:40.328159  299920 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 13:30:40.393096  299920 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 13:30:40.463947  299920 main.go:141] libmachine: Stopping "default-k8s-diff-port-972693"...
	I0729 13:30:40.463974  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:30:40.465528  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Stop
	I0729 13:30:40.468644  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 0/120
	I0729 13:30:41.470575  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 1/120
	I0729 13:30:42.472239  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 2/120
	I0729 13:30:43.473836  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 3/120
	I0729 13:30:44.475405  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 4/120
	I0729 13:30:45.477597  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 5/120
	I0729 13:30:46.479029  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 6/120
	I0729 13:30:47.480463  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 7/120
	I0729 13:30:48.481747  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 8/120
	I0729 13:30:49.483039  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 9/120
	I0729 13:30:50.485447  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 10/120
	I0729 13:30:51.486847  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 11/120
	I0729 13:30:52.488237  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 12/120
	I0729 13:30:53.489710  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 13/120
	I0729 13:30:54.490999  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 14/120
	I0729 13:30:55.492717  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 15/120
	I0729 13:30:56.494147  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 16/120
	I0729 13:30:57.495577  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 17/120
	I0729 13:30:58.497034  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 18/120
	I0729 13:30:59.498485  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 19/120
	I0729 13:31:00.499624  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 20/120
	I0729 13:31:01.501181  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 21/120
	I0729 13:31:02.502389  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 22/120
	I0729 13:31:03.503876  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 23/120
	I0729 13:31:04.505498  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 24/120
	I0729 13:31:05.507818  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 25/120
	I0729 13:31:06.509105  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 26/120
	I0729 13:31:07.510567  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 27/120
	I0729 13:31:08.511929  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 28/120
	I0729 13:31:09.513512  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 29/120
	I0729 13:31:10.515788  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 30/120
	I0729 13:31:11.517324  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 31/120
	I0729 13:31:12.518741  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 32/120
	I0729 13:31:13.520287  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 33/120
	I0729 13:31:14.521754  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 34/120
	I0729 13:31:15.523650  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 35/120
	I0729 13:31:16.525001  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 36/120
	I0729 13:31:17.526499  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 37/120
	I0729 13:31:18.527804  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 38/120
	I0729 13:31:19.529194  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 39/120
	I0729 13:31:20.531665  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 40/120
	I0729 13:31:21.533017  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 41/120
	I0729 13:31:22.534315  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 42/120
	I0729 13:31:23.535889  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 43/120
	I0729 13:31:24.537349  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 44/120
	I0729 13:31:25.539501  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 45/120
	I0729 13:31:26.540891  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 46/120
	I0729 13:31:27.542229  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 47/120
	I0729 13:31:28.543687  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 48/120
	I0729 13:31:29.545118  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 49/120
	I0729 13:31:30.547596  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 50/120
	I0729 13:31:31.549049  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 51/120
	I0729 13:31:32.550412  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 52/120
	I0729 13:31:33.551880  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 53/120
	I0729 13:31:34.553266  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 54/120
	I0729 13:31:35.555337  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 55/120
	I0729 13:31:36.556693  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 56/120
	I0729 13:31:37.558319  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 57/120
	I0729 13:31:38.559528  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 58/120
	I0729 13:31:39.560971  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 59/120
	I0729 13:31:40.563225  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 60/120
	I0729 13:31:41.564720  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 61/120
	I0729 13:31:42.566234  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 62/120
	I0729 13:31:43.567799  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 63/120
	I0729 13:31:44.569190  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 64/120
	I0729 13:31:45.571357  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 65/120
	I0729 13:31:46.572868  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 66/120
	I0729 13:31:47.574278  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 67/120
	I0729 13:31:48.575733  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 68/120
	I0729 13:31:49.577323  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 69/120
	I0729 13:31:50.579708  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 70/120
	I0729 13:31:51.581262  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 71/120
	I0729 13:31:52.582752  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 72/120
	I0729 13:31:53.584158  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 73/120
	I0729 13:31:54.585718  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 74/120
	I0729 13:31:55.587875  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 75/120
	I0729 13:31:56.589347  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 76/120
	I0729 13:31:57.590730  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 77/120
	I0729 13:31:58.592100  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 78/120
	I0729 13:31:59.593377  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 79/120
	I0729 13:32:00.595639  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 80/120
	I0729 13:32:01.597143  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 81/120
	I0729 13:32:02.598468  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 82/120
	I0729 13:32:03.599704  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 83/120
	I0729 13:32:04.601189  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 84/120
	I0729 13:32:05.603107  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 85/120
	I0729 13:32:06.604552  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 86/120
	I0729 13:32:07.605685  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 87/120
	I0729 13:32:08.607469  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 88/120
	I0729 13:32:09.608890  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 89/120
	I0729 13:32:10.611350  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 90/120
	I0729 13:32:11.613799  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 91/120
	I0729 13:32:12.614998  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 92/120
	I0729 13:32:13.616288  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 93/120
	I0729 13:32:14.617696  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 94/120
	I0729 13:32:15.619845  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 95/120
	I0729 13:32:16.621353  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 96/120
	I0729 13:32:17.622745  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 97/120
	I0729 13:32:18.624176  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 98/120
	I0729 13:32:19.625716  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 99/120
	I0729 13:32:20.627899  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 100/120
	I0729 13:32:21.629429  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 101/120
	I0729 13:32:22.630689  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 102/120
	I0729 13:32:23.632183  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 103/120
	I0729 13:32:24.633491  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 104/120
	I0729 13:32:25.635583  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 105/120
	I0729 13:32:26.637149  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 106/120
	I0729 13:32:27.639236  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 107/120
	I0729 13:32:28.640689  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 108/120
	I0729 13:32:29.641975  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 109/120
	I0729 13:32:30.644169  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 110/120
	I0729 13:32:31.645783  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 111/120
	I0729 13:32:32.646997  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 112/120
	I0729 13:32:33.648342  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 113/120
	I0729 13:32:34.649340  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 114/120
	I0729 13:32:35.651232  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 115/120
	I0729 13:32:36.652856  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 116/120
	I0729 13:32:37.654040  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 117/120
	I0729 13:32:38.655337  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 118/120
	I0729 13:32:39.656828  299920 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for machine to stop 119/120
	I0729 13:32:40.658019  299920 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 13:32:40.658099  299920 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 13:32:40.660108  299920 out.go:177] 
	W0729 13:32:40.661648  299920 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 13:32:40.661668  299920 out.go:239] * 
	* 
	W0729 13:32:40.664517  299920 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:32:40.665841  299920 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-972693 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
E0729 13:32:49.472352  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:49.854453  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:32:58.364611  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693: exit status 3 (18.59336077s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:59.261184  300780 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.34:22: connect: no route to host
	E0729 13:32:59.261207  300780 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.34:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-972693" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920: exit status 3 (3.167687188s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:28.381125  300362 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.207:22: connect: no route to host
	E0729 13:32:28.381149  300362 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.207:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-135920 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 13:32:28.991359  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:28.996610  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:29.006845  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:29.027167  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:29.067481  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:29.147918  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-135920 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153143439s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.207:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-135920 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920: exit status 3 (3.062579118s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:37.597226  300629 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.207:22: connect: no route to host
	E0729 13:32:37.597252  300629 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.207:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-135920" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777
E0729 13:32:27.643490  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777: exit status 3 (3.167643405s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:29.149103  300415 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	E0729 13:32:29.149122  300415 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-566777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 13:32:29.308581  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:29.629208  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:30.269777  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:32:31.550313  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-566777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152929588s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-566777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777: exit status 3 (3.062896039s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:38.365219  300659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	E0729 13:32:38.365240  300659 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-566777" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-924039 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-924039 create -f testdata/busybox.yaml: exit status 1 (44.954985ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-924039" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-924039 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 6 (216.502762ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:32.863447  300539 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-924039" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 6 (213.241127ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:32:33.076723  300569 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-924039" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-924039 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 13:32:34.111143  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-924039 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.103747001s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-924039 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-924039 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-924039 describe deploy/metrics-server -n kube-system: exit status 1 (44.112428ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-924039" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-924039 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 6 (215.073055ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:34:08.440586  301286 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-924039" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693: exit status 3 (3.167563846s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:33:02.429206  300881 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.34:22: connect: no route to host
	E0729 13:33:02.429230  300881 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.34:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-972693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 13:33:05.413648  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:05.418914  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:05.429165  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:05.449451  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:05.489809  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:05.570175  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:05.730606  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:06.051436  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:06.691626  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:07.971863  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-972693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153776888s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.34:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-972693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
E0729 13:33:09.952841  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:33:10.532448  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693: exit status 3 (3.062314052s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:33:11.645216  300988 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.34:22: connect: no route to host
	E0729 13:33:11.645243  300988 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.34:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-972693" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (735.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-924039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0729 13:34:27.334303  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:34:27.401554  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:34:27.881239  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 13:34:40.495620  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:35:01.245566  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:35:06.009816  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:35:08.361738  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:35:12.834467  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:35:33.695876  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:35:37.633590  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:35:49.255262  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:36:05.316716  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:36:30.283112  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:36:56.650213  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:37:17.402444  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:37:18.313010  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 13:37:24.336553  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:37:28.991210  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:37:45.086762  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:37:56.674972  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:38:05.412343  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:38:33.095482  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:38:46.439578  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:39:14.124287  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:39:27.881425  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 13:40:06.010097  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:40:21.366614  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 13:40:37.632757  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:41:56.649519  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:42:17.402248  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:42:18.313857  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 13:42:28.991602  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-924039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m12.11947863s)

                                                
                                                
-- stdout --
	* [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:10.969348  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969356  301425 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:10.969360  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969506  301425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:34:10.970007  301425 out.go:298] Setting JSON to false
	I0729 13:34:10.970908  301425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11794,"bootTime":1722248257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:34:10.970971  301425 start.go:139] virtualization: kvm guest
	I0729 13:34:10.973245  301425 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:34:10.974804  301425 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:34:10.974803  301425 notify.go:220] Checking for updates...
	I0729 13:34:10.977011  301425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:34:10.978270  301425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:34:10.979473  301425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:34:10.980743  301425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:34:10.981923  301425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:34:10.983514  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:34:10.983962  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:10.984049  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:10.998985  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 13:34:10.999407  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:10.999928  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:10.999951  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.000306  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.000497  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.002455  301425 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:34:11.003702  301425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:34:11.003997  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:11.004037  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:11.018707  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0729 13:34:11.019177  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:11.019653  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:11.019676  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.019968  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.020126  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.055819  301425 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:34:11.057085  301425 start.go:297] selected driver: kvm2
	I0729 13:34:11.057104  301425 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.057242  301425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:34:11.057967  301425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.058029  301425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:34:11.073706  301425 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:34:11.074089  301425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:34:11.074169  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:34:11.074188  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:34:11.074240  301425 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.074366  301425 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.076296  301425 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:34:11.077828  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:34:11.077869  301425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:34:11.077879  301425 cache.go:56] Caching tarball of preloaded images
	I0729 13:34:11.077959  301425 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:34:11.077970  301425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:34:11.078069  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:34:11.078241  301425 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:53.401823  301425 start.go:364] duration metric: took 3m42.323534375s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:37:53.401902  301425 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:53.401914  301425 fix.go:54] fixHost starting: 
	I0729 13:37:53.402310  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:53.402344  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:53.421973  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 13:37:53.422456  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:53.423079  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:37:53.423112  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:53.423508  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:53.423734  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:37:53.423883  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:37:53.425687  301425 fix.go:112] recreateIfNeeded on old-k8s-version-924039: state=Stopped err=<nil>
	I0729 13:37:53.425733  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	W0729 13:37:53.425902  301425 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:53.427931  301425 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	I0729 13:37:53.429298  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .Start
	I0729 13:37:53.429471  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:37:53.430263  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:37:53.430649  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:37:53.431011  301425 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:37:53.431825  301425 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:37:54.749878  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:37:54.751148  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.751716  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.751784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.751696  302377 retry.go:31] will retry after 230.330776ms: waiting for machine to come up
	I0729 13:37:54.984551  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.985138  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.985183  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.985094  302377 retry.go:31] will retry after 291.000555ms: waiting for machine to come up
	I0729 13:37:55.277730  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.278199  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.278220  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.278152  302377 retry.go:31] will retry after 360.474919ms: waiting for machine to come up
	I0729 13:37:55.640675  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.641255  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.641288  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.641207  302377 retry.go:31] will retry after 480.424143ms: waiting for machine to come up
	I0729 13:37:56.123854  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.124460  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.124487  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.124433  302377 retry.go:31] will retry after 529.614291ms: waiting for machine to come up
	I0729 13:37:56.656136  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.656626  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.656657  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.656599  302377 retry.go:31] will retry after 794.429248ms: waiting for machine to come up
	I0729 13:37:57.452523  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:57.453001  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:57.453033  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:57.452952  302377 retry.go:31] will retry after 1.140583184s: waiting for machine to come up
	I0729 13:37:58.594636  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:58.595067  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:58.595109  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:58.595024  302377 retry.go:31] will retry after 894.563974ms: waiting for machine to come up
	I0729 13:37:59.491447  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:59.492094  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:59.492120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:59.491993  302377 retry.go:31] will retry after 1.145531829s: waiting for machine to come up
	I0729 13:38:00.639387  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:00.639807  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:00.639838  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:00.639754  302377 retry.go:31] will retry after 1.949675091s: waiting for machine to come up
	I0729 13:38:02.590640  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:02.591134  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:02.591162  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:02.591087  302377 retry.go:31] will retry after 1.765945358s: waiting for machine to come up
	I0729 13:38:04.358332  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:04.358934  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:04.358963  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:04.358899  302377 retry.go:31] will retry after 2.923224015s: waiting for machine to come up
	I0729 13:38:07.284234  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:07.284764  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:07.284819  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:07.284694  302377 retry.go:31] will retry after 2.9786525s: waiting for machine to come up
	I0729 13:38:10.265771  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:10.266128  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:10.266161  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:10.266077  302377 retry.go:31] will retry after 5.044155966s: waiting for machine to come up
	I0729 13:38:15.313102  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313621  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313650  301425 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:38:15.313665  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:38:15.314120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.314168  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | skip adding static IP to network mk-old-k8s-version-924039 - found existing host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"}
	I0729 13:38:15.314187  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:38:15.314205  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:38:15.314219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:38:15.316468  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316779  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.316827  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316994  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:38:15.317013  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:38:15.317042  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:15.317054  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:38:15.317076  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:38:15.444818  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:15.445203  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:38:15.445858  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.448296  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.448784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.448834  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.449028  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:38:15.449208  301425 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:15.449226  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:15.449469  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.451695  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452017  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.452046  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.452420  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452606  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452770  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.452945  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.453151  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.453165  301425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:15.561558  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:15.561590  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.561859  301425 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:38:15.561887  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.562079  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.564776  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565116  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.565157  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565286  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.565495  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565669  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565805  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.565952  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.566129  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.566140  301425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:38:15.687712  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:38:15.687744  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.690289  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690614  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.690638  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690864  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.691104  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691290  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691463  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.691649  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.691841  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.691869  301425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:15.814102  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:15.814140  301425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:15.814190  301425 buildroot.go:174] setting up certificates
	I0729 13:38:15.814198  301425 provision.go:84] configureAuth start
	I0729 13:38:15.814210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.814521  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.817140  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817548  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.817583  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817728  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.819957  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820307  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.820335  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820476  301425 provision.go:143] copyHostCerts
	I0729 13:38:15.820529  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:15.820539  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:15.820592  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:15.820685  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:15.820693  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:15.820713  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:15.820772  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:15.820779  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:15.820828  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:15.820909  301425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:38:15.895797  301425 provision.go:177] copyRemoteCerts
	I0729 13:38:15.895866  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:15.895898  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.898774  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899173  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.899214  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899444  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.899672  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.899882  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.900048  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:15.988091  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:16.019058  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:38:16.047266  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:16.072992  301425 provision.go:87] duration metric: took 258.777499ms to configureAuth
	I0729 13:38:16.073029  301425 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:16.073250  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:38:16.073338  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.075801  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.076219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076350  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.076560  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076750  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076972  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.077169  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.077354  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.077369  301425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:16.357614  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:16.357650  301425 machine.go:97] duration metric: took 908.424232ms to provisionDockerMachine
	I0729 13:38:16.357666  301425 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:38:16.357680  301425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:16.357706  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.358060  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:16.358089  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.360841  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361257  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.361314  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361410  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.361645  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.361821  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.361987  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.448673  301425 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:16.453435  301425 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:16.453461  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:16.453543  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:16.453638  301425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:16.453763  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:16.464185  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:16.490358  301425 start.go:296] duration metric: took 132.675687ms for postStartSetup
	I0729 13:38:16.490422  301425 fix.go:56] duration metric: took 23.088507704s for fixHost
	I0729 13:38:16.490450  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.493249  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493571  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.493612  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493781  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.494046  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494241  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494388  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.494561  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.494759  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.494769  301425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:38:16.605903  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260296.583363181
	
	I0729 13:38:16.605930  301425 fix.go:216] guest clock: 1722260296.583363181
	I0729 13:38:16.605940  301425 fix.go:229] Guest: 2024-07-29 13:38:16.583363181 +0000 UTC Remote: 2024-07-29 13:38:16.490427183 +0000 UTC m=+245.556685019 (delta=92.935998ms)
	I0729 13:38:16.605967  301425 fix.go:200] guest clock delta is within tolerance: 92.935998ms
	I0729 13:38:16.605974  301425 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 23.204101255s
	I0729 13:38:16.606006  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.606296  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:16.609324  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609669  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.609701  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609826  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610328  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610516  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610589  301425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:16.610673  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.610758  301425 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:16.610786  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.613356  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613639  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613689  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.613712  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613910  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614092  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.614112  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.614122  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614287  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614307  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614449  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.614496  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614635  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614771  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.719174  301425 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:16.726348  301425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:16.880130  301425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:16.886410  301425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:16.886484  301425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:16.904120  301425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:16.904151  301425 start.go:495] detecting cgroup driver to use...
	I0729 13:38:16.904222  301425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:16.927036  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:16.947380  301425 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:16.947448  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:16.964612  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:16.979266  301425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:17.108950  301425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:17.263118  301425 docker.go:233] disabling docker service ...
	I0729 13:38:17.263192  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:17.282563  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:17.299473  301425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:17.448598  301425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:17.568025  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:17.583700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:17.603159  301425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:38:17.603223  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.615655  301425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:17.615728  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.628639  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.640456  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.652160  301425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:17.663864  301425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:17.675293  301425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:17.675361  301425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:17.690427  301425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:17.702163  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:17.831401  301425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:17.985760  301425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:17.985851  301425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:17.990740  301425 start.go:563] Will wait 60s for crictl version
	I0729 13:38:17.990798  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:17.994741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:18.035793  301425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:18.035889  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.065036  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.097441  301425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:38:18.098840  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:18.102182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102629  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:18.102665  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102925  301425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:18.107544  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:18.122039  301425 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:18.122176  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:38:18.122249  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:18.169198  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:18.169279  301425 ssh_runner.go:195] Run: which lz4
	I0729 13:38:18.173861  301425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:38:18.178840  301425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:18.178881  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:38:19.887360  301425 crio.go:462] duration metric: took 1.713549828s to copy over tarball
	I0729 13:38:19.887450  301425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:22.836067  301425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.948583188s)
	I0729 13:38:22.836104  301425 crio.go:469] duration metric: took 2.948710335s to extract the tarball
	I0729 13:38:22.836114  301425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:22.878370  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:22.921339  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:22.921370  301425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:38:22.921445  301425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.921545  301425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.921547  301425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:38:22.921633  301425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:22.921475  301425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.921479  301425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923052  301425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:38:22.923712  301425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.923723  301425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923733  301425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.923743  301425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.923803  301425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.923923  301425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.923976  301425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.079335  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.095210  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.096664  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.109172  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.111720  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.114386  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.200545  301425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:38:23.200629  301425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.200698  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.203884  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:38:23.261424  301425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:38:23.261500  301425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.261528  301425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:38:23.261561  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.261569  301425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.261610  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.267971  301425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:38:23.268018  301425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.268075  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317322  301425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:38:23.317369  301425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.317387  301425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:38:23.317422  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317441  301425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.317440  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.317489  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317507  301425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:38:23.317530  301425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:38:23.317551  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.317588  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.317553  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317683  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.322770  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.432764  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:38:23.432833  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:38:23.432877  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.442661  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:38:23.442741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:38:23.442785  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:38:23.442825  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:38:23.481401  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:38:23.484727  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:38:24.057020  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:24.203622  301425 cache_images.go:92] duration metric: took 1.282232497s to LoadCachedImages
	W0729 13:38:24.203724  301425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 13:38:24.203742  301425 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:38:24.203883  301425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:24.203996  301425 ssh_runner.go:195] Run: crio config
	I0729 13:38:24.274480  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:38:24.274531  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:24.274547  301425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:24.274582  301425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:38:24.274784  301425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:24.274863  301425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:38:24.285241  301425 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:24.285333  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:24.294677  301425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:38:24.311572  301425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:24.328768  301425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:38:24.346849  301425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:24.351047  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:24.364302  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:24.502947  301425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:24.524583  301425 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:38:24.524610  301425 certs.go:194] generating shared ca certs ...
	I0729 13:38:24.524626  301425 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:24.524831  301425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:24.524889  301425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:24.524908  301425 certs.go:256] generating profile certs ...
	I0729 13:38:24.525030  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:38:24.525090  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:38:24.525143  301425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:38:24.525300  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:24.525345  301425 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:24.525359  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:24.525390  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:24.525416  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:24.525440  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:24.525495  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:24.526416  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:24.593901  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:24.641443  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:24.679927  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:24.740839  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:38:24.779899  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:38:24.814327  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:24.842166  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:38:24.868619  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:24.894053  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:24.921437  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:24.947676  301425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:24.966469  301425 ssh_runner.go:195] Run: openssl version
	I0729 13:38:24.972780  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:24.985676  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990293  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990356  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.996523  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:25.007631  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:25.018369  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022779  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022840  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.028471  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:25.039307  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:25.050190  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054731  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054799  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.060568  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:25.071531  301425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:25.076195  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:25.082194  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:25.088573  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:25.095625  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:25.101900  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:25.107797  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:38:25.113775  301425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:25.113903  301425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:25.113975  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.159804  301425 cri.go:89] found id: ""
	I0729 13:38:25.159887  301425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:25.172248  301425 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:25.172271  301425 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:25.172321  301425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:25.182852  301425 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:25.184249  301425 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:25.186246  301425 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-924039" cluster setting kubeconfig missing "old-k8s-version-924039" context setting]
	I0729 13:38:25.188334  301425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:25.262355  301425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:25.274019  301425 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0729 13:38:25.274063  301425 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:25.274078  301425 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:25.274141  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.311295  301425 cri.go:89] found id: ""
	I0729 13:38:25.311365  301425 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:25.330380  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:25.343607  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:25.343651  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:25.343709  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:25.356979  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:25.357048  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:25.370453  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:25.386234  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:25.386308  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:25.403905  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.413906  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:25.414011  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.431532  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:25.448250  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:25.448325  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:25.459773  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:25.469841  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:25.584845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.367294  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.618571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.775377  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.860948  301425 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:26.861038  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.361227  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.362003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.861172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.361165  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.861469  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.361306  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.861442  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.361866  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.361776  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.862004  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.361883  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.862010  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.362013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.861958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.361390  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.861465  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.362042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.862022  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.361208  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.862020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.362115  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.861360  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.362077  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.861478  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.361278  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.861920  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.361613  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.861155  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.361524  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.862047  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.361778  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.862055  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.861737  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.361194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.862019  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.862046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.362045  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.361183  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.862026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.361204  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.861490  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.361635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.861519  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.861510  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.362026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.861182  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.361850  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.861931  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.362035  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.861192  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.361173  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.862018  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.361740  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.862033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.362084  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.861406  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.861194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.361788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.861962  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.362043  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.862000  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.362213  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.861107  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.361767  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.861151  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.361607  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.862013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.362032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.861858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.361611  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.862037  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.362002  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.861635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.361659  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.862061  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.862083  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.361356  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.861763  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.361420  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.861822  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.362046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.861909  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.861834  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.361461  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.861666  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.861830  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.361141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.862003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.361731  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.862014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.361702  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.862141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.361808  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.361104  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.861123  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.361276  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.861176  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.362052  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.861150  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.361802  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.861996  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.362106  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.861135  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.361998  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.862048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.361848  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.861813  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:26.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
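	The run above is a ~60-second poll: api_server.go reruns the same pgrep over SSH roughly twice a second until a kube-apiserver process shows up, and here it never does. A minimal sketch of that polling pattern is below; it is illustrative only, not minikube's actual code, and the runSSH helper and timeout value are assumptions.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runSSH stands in for minikube's ssh_runner; here it just shells out locally.
	func runSSH(cmd string) error {
		return exec.Command("/bin/bash", "-c", cmd).Run()
	}

	// waitForAPIServerProcess polls until pgrep finds a kube-apiserver process
	// or the deadline passes. pgrep exits 0 only when a matching process exists.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(60 * time.Second); err != nil {
			fmt.Println(err)
		}
	}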
	I0729 13:39:26.861651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:26.861733  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:26.904275  301425 cri.go:89] found id: ""
	I0729 13:39:26.904307  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.904315  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:26.904322  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:26.904387  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:26.946925  301425 cri.go:89] found id: ""
	I0729 13:39:26.946954  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.946966  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:26.946973  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:26.947036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:26.979236  301425 cri.go:89] found id: ""
	I0729 13:39:26.979267  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.979276  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:26.979282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:26.979330  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:27.022185  301425 cri.go:89] found id: ""
	I0729 13:39:27.022212  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.022220  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:27.022226  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:27.022277  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:27.055228  301425 cri.go:89] found id: ""
	I0729 13:39:27.055256  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.055266  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:27.055274  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:27.055335  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:27.088885  301425 cri.go:89] found id: ""
	I0729 13:39:27.088918  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.088926  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:27.088933  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:27.088986  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:27.123861  301425 cri.go:89] found id: ""
	I0729 13:39:27.123893  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.123902  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:27.123915  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:27.123967  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:27.157921  301425 cri.go:89] found id: ""
	I0729 13:39:27.157956  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.157964  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:27.157988  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:27.158003  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.222447  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:27.222489  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:27.265646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:27.265680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:27.317344  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:27.317388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:27.333664  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:27.333689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:27.460502  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
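	Once the process poll gives up, each cycle asks crictl for every control-plane component by name, finds zero containers, and falls back to collecting kubelet, dmesg, CRI-O, and container-status output for the failure report. The sketch below shows that enumeration-plus-fallback shape under stated assumptions (crictl on PATH, passwordless sudo); it is not minikube's implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the container IDs crictl reports for a name filter.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"}
		found := 0
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers\n", c, len(ids))
			found += len(ids)
		}
		if found == 0 {
			// Mirror the log's fallback: dump recent kubelet journal entries for diagnosis.
			out, _ := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").CombinedOutput()
			fmt.Println(string(out))
		}
	}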
	I0729 13:39:29.960703  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:29.974159  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:29.974235  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:30.009701  301425 cri.go:89] found id: ""
	I0729 13:39:30.009740  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.009753  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:30.009761  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:30.009822  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:30.045806  301425 cri.go:89] found id: ""
	I0729 13:39:30.045841  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.045853  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:30.045860  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:30.045924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:30.078709  301425 cri.go:89] found id: ""
	I0729 13:39:30.078738  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.078747  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:30.078753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:30.078808  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:30.112884  301425 cri.go:89] found id: ""
	I0729 13:39:30.112920  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.112932  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:30.112943  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:30.113012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:30.148160  301425 cri.go:89] found id: ""
	I0729 13:39:30.148196  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.148208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:30.148217  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:30.148285  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:30.186939  301425 cri.go:89] found id: ""
	I0729 13:39:30.186967  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.186975  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:30.186981  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:30.187039  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:30.241888  301425 cri.go:89] found id: ""
	I0729 13:39:30.241915  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.241926  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:30.241934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:30.242009  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:30.281482  301425 cri.go:89] found id: ""
	I0729 13:39:30.281510  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.281518  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:30.281527  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:30.281540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:30.321688  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:30.321730  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:30.378464  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:30.378508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:30.394109  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:30.394150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:30.474077  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:30.474101  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:30.474118  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
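	The recurring "connection to the server localhost:8443 was refused" in the describe-nodes step amounts to nothing listening on the apiserver port yet. A tiny probe of that condition is sketched below; the address and port are taken from the log, the probe itself is only illustrative.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // e.g. connection refused
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}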
	I0729 13:39:33.046016  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:33.059705  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:33.059795  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:33.096521  301425 cri.go:89] found id: ""
	I0729 13:39:33.096549  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.096557  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:33.096564  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:33.096621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:33.131262  301425 cri.go:89] found id: ""
	I0729 13:39:33.131295  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.131307  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:33.131314  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:33.131378  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:33.168889  301425 cri.go:89] found id: ""
	I0729 13:39:33.168915  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.168925  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:33.168932  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:33.168994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:33.205513  301425 cri.go:89] found id: ""
	I0729 13:39:33.205547  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.205558  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:33.205567  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:33.205644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:33.247051  301425 cri.go:89] found id: ""
	I0729 13:39:33.247079  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.247087  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:33.247093  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:33.247149  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:33.279541  301425 cri.go:89] found id: ""
	I0729 13:39:33.279575  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.279587  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:33.279596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:33.279659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:33.314000  301425 cri.go:89] found id: ""
	I0729 13:39:33.314034  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.314046  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:33.314054  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:33.314117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:33.351363  301425 cri.go:89] found id: ""
	I0729 13:39:33.351390  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.351401  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:33.351412  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:33.351437  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:33.413509  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:33.413547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:33.428128  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:33.428165  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:33.495430  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:33.495461  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:33.495478  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:33.574060  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:33.574098  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.113561  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:36.126899  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:36.126965  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:36.163363  301425 cri.go:89] found id: ""
	I0729 13:39:36.163396  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.163407  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:36.163414  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:36.163473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:36.205215  301425 cri.go:89] found id: ""
	I0729 13:39:36.205243  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.205259  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:36.205267  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:36.205331  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:36.243166  301425 cri.go:89] found id: ""
	I0729 13:39:36.243220  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.243231  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:36.243239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:36.243295  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:36.280804  301425 cri.go:89] found id: ""
	I0729 13:39:36.280836  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.280845  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:36.280852  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:36.280903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:36.317291  301425 cri.go:89] found id: ""
	I0729 13:39:36.317320  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.317330  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:36.317337  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:36.317399  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:36.358111  301425 cri.go:89] found id: ""
	I0729 13:39:36.358145  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.358156  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:36.358164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:36.358229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:36.399407  301425 cri.go:89] found id: ""
	I0729 13:39:36.399440  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.399451  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:36.399459  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:36.399525  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:36.437876  301425 cri.go:89] found id: ""
	I0729 13:39:36.437904  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.437914  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:36.437926  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:36.437942  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:36.514464  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:36.514493  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:36.514511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:36.592036  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:36.592083  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.647650  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:36.647691  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:36.706890  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:36.706935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.226070  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:39.239313  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:39.239373  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:39.274158  301425 cri.go:89] found id: ""
	I0729 13:39:39.274191  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.274202  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:39.274210  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:39.274286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:39.308448  301425 cri.go:89] found id: ""
	I0729 13:39:39.308484  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.308492  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:39.308499  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:39.308563  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:39.347745  301425 cri.go:89] found id: ""
	I0729 13:39:39.347782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.347791  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:39.347798  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:39.347856  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:39.380649  301425 cri.go:89] found id: ""
	I0729 13:39:39.380679  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.380688  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:39.380696  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:39.380767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:39.415076  301425 cri.go:89] found id: ""
	I0729 13:39:39.415107  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.415115  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:39.415120  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:39.415170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:39.450749  301425 cri.go:89] found id: ""
	I0729 13:39:39.450782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.450793  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:39.450801  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:39.450864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:39.482148  301425 cri.go:89] found id: ""
	I0729 13:39:39.482175  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.482184  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:39.482190  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:39.482239  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:39.518558  301425 cri.go:89] found id: ""
	I0729 13:39:39.518588  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.518597  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:39.518608  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:39.518622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:39.555753  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:39.555786  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:39.606627  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:39.606661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.620359  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:39.620388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:39.690685  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:39.690711  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:39.690728  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.271925  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:42.284365  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:42.284447  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:42.318966  301425 cri.go:89] found id: ""
	I0729 13:39:42.318998  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.319020  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:42.319028  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:42.319111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:42.354811  301425 cri.go:89] found id: ""
	I0729 13:39:42.354840  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.354854  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:42.354862  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:42.354917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:42.402524  301425 cri.go:89] found id: ""
	I0729 13:39:42.402557  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.402569  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:42.402577  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:42.402643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:42.460954  301425 cri.go:89] found id: ""
	I0729 13:39:42.460984  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.461001  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:42.461010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:42.461063  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:42.516849  301425 cri.go:89] found id: ""
	I0729 13:39:42.516880  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.516890  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:42.516898  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:42.516963  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:42.560289  301425 cri.go:89] found id: ""
	I0729 13:39:42.560316  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.560325  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:42.560332  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:42.560397  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:42.597798  301425 cri.go:89] found id: ""
	I0729 13:39:42.597829  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.597839  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:42.597847  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:42.597912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:42.633015  301425 cri.go:89] found id: ""
	I0729 13:39:42.633043  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.633059  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:42.633068  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:42.633080  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:42.711103  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:42.711126  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:42.711141  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.787459  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:42.787499  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:42.828965  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:42.829002  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:42.881702  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:42.881740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:45.396462  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:45.410766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:45.410859  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:45.445886  301425 cri.go:89] found id: ""
	I0729 13:39:45.445931  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.445943  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:45.445960  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:45.446023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:45.484293  301425 cri.go:89] found id: ""
	I0729 13:39:45.484326  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.484338  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:45.484346  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:45.484410  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:45.520209  301425 cri.go:89] found id: ""
	I0729 13:39:45.520237  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.520246  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:45.520252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:45.520300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:45.555671  301425 cri.go:89] found id: ""
	I0729 13:39:45.555702  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.555711  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:45.555717  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:45.555767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:45.594578  301425 cri.go:89] found id: ""
	I0729 13:39:45.594609  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.594618  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:45.594624  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:45.594685  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:45.631777  301425 cri.go:89] found id: ""
	I0729 13:39:45.631805  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.631817  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:45.631825  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:45.631881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:45.667163  301425 cri.go:89] found id: ""
	I0729 13:39:45.667189  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.667197  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:45.667203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:45.667258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:45.703393  301425 cri.go:89] found id: ""
	I0729 13:39:45.703434  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.703443  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:45.703454  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:45.703488  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:45.774424  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:45.774452  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:45.774472  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:45.857529  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:45.857586  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:45.899737  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:45.899775  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:45.952640  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:45.952685  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.467705  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:48.482292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:48.482380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:48.520146  301425 cri.go:89] found id: ""
	I0729 13:39:48.520181  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.520195  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:48.520204  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:48.520282  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:48.552623  301425 cri.go:89] found id: ""
	I0729 13:39:48.552654  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.552665  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:48.552672  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:48.552734  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:48.587254  301425 cri.go:89] found id: ""
	I0729 13:39:48.587290  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.587303  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:48.587309  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:48.587368  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:48.621045  301425 cri.go:89] found id: ""
	I0729 13:39:48.621076  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.621088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:48.621096  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:48.621160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:48.654117  301425 cri.go:89] found id: ""
	I0729 13:39:48.654151  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.654163  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:48.654171  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:48.654236  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:48.693108  301425 cri.go:89] found id: ""
	I0729 13:39:48.693149  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.693166  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:48.693173  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:48.693225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:48.733000  301425 cri.go:89] found id: ""
	I0729 13:39:48.733025  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.733033  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:48.733039  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:48.733088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:48.773761  301425 cri.go:89] found id: ""
	I0729 13:39:48.773789  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.773798  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:48.773807  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:48.773822  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:48.826655  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:48.826683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.840335  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:48.840364  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:48.913727  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:48.913754  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:48.913774  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:48.990196  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:48.990235  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:51.533333  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:51.547115  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:51.547175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:51.583247  301425 cri.go:89] found id: ""
	I0729 13:39:51.583284  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.583292  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:51.583297  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:51.583350  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:51.618925  301425 cri.go:89] found id: ""
	I0729 13:39:51.618958  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.618969  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:51.618977  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:51.619036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:51.657099  301425 cri.go:89] found id: ""
	I0729 13:39:51.657132  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.657144  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:51.657151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:51.657210  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:51.695413  301425 cri.go:89] found id: ""
	I0729 13:39:51.695459  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.695471  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:51.695480  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:51.695553  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:51.731153  301425 cri.go:89] found id: ""
	I0729 13:39:51.731186  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.731198  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:51.731206  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:51.731271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:51.765662  301425 cri.go:89] found id: ""
	I0729 13:39:51.765716  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.765730  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:51.765740  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:51.765807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:51.800442  301425 cri.go:89] found id: ""
	I0729 13:39:51.800480  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.800491  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:51.800500  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:51.800562  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:51.844516  301425 cri.go:89] found id: ""
	I0729 13:39:51.844542  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.844551  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:51.844562  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:51.844580  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:51.896139  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:51.896176  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:51.910479  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:51.910511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:51.980025  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:51.980052  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:51.980071  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:52.054674  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:52.054717  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.596468  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:54.612233  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:54.612344  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:54.653506  301425 cri.go:89] found id: ""
	I0729 13:39:54.653547  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.653558  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:54.653565  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:54.653624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:54.696964  301425 cri.go:89] found id: ""
	I0729 13:39:54.697002  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.697015  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:54.697023  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:54.697088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:54.731165  301425 cri.go:89] found id: ""
	I0729 13:39:54.731196  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.731207  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:54.731214  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:54.731279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:54.774397  301425 cri.go:89] found id: ""
	I0729 13:39:54.774426  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.774437  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:54.774444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:54.774506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:54.813365  301425 cri.go:89] found id: ""
	I0729 13:39:54.813396  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.813408  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:54.813414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:54.813480  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:54.849936  301425 cri.go:89] found id: ""
	I0729 13:39:54.849962  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.849970  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:54.849980  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:54.850042  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:54.883979  301425 cri.go:89] found id: ""
	I0729 13:39:54.884007  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.884015  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:54.884021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:54.884087  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:54.919754  301425 cri.go:89] found id: ""
	I0729 13:39:54.919779  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.919787  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:54.919796  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:54.919817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:54.973082  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:54.973117  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:54.986534  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:54.986571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:55.055473  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:55.055499  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:55.055514  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:55.138278  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:55.138322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:57.683818  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:57.698992  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:57.699070  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:57.742071  301425 cri.go:89] found id: ""
	I0729 13:39:57.742103  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.742113  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:57.742121  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:57.742185  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:57.777871  301425 cri.go:89] found id: ""
	I0729 13:39:57.777902  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.777911  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:57.777918  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:57.777975  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:57.817767  301425 cri.go:89] found id: ""
	I0729 13:39:57.817798  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.817809  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:57.817817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:57.817889  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:57.855608  301425 cri.go:89] found id: ""
	I0729 13:39:57.855634  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.855644  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:57.855651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:57.855714  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:57.891219  301425 cri.go:89] found id: ""
	I0729 13:39:57.891248  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.891258  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:57.891266  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:57.891336  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:57.926000  301425 cri.go:89] found id: ""
	I0729 13:39:57.926034  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.926045  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:57.926053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:57.926116  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:57.964935  301425 cri.go:89] found id: ""
	I0729 13:39:57.964962  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.964978  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:57.964985  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:57.965051  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:58.001363  301425 cri.go:89] found id: ""
	I0729 13:39:58.001393  301425 logs.go:276] 0 containers: []
	W0729 13:39:58.001405  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:58.001417  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:58.001434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:58.057551  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:58.057598  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:58.072162  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:58.072200  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:58.140533  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:58.140565  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:58.140582  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:58.227285  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:58.227330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:00.769075  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:00.783394  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:00.783471  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:00.831260  301425 cri.go:89] found id: ""
	I0729 13:40:00.831291  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.831301  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:00.831309  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:00.831370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:00.870017  301425 cri.go:89] found id: ""
	I0729 13:40:00.870045  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.870057  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:00.870065  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:00.870127  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:00.904691  301425 cri.go:89] found id: ""
	I0729 13:40:00.904728  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.904740  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:00.904748  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:00.904828  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:00.937221  301425 cri.go:89] found id: ""
	I0729 13:40:00.937249  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.937259  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:00.937265  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:00.937329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:00.977961  301425 cri.go:89] found id: ""
	I0729 13:40:00.977991  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.978002  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:00.978010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:00.978104  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:01.014239  301425 cri.go:89] found id: ""
	I0729 13:40:01.014271  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.014283  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:01.014292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:01.014362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:01.050583  301425 cri.go:89] found id: ""
	I0729 13:40:01.050615  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.050630  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:01.050637  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:01.050696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:01.091599  301425 cri.go:89] found id: ""
	I0729 13:40:01.091624  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.091634  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:01.091643  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:01.091661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:01.146404  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:01.146445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:01.160327  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:01.160358  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:01.237120  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:01.237147  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:01.237162  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:01.321539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:01.321590  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:03.865268  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:03.879648  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:03.879724  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:03.915303  301425 cri.go:89] found id: ""
	I0729 13:40:03.915329  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.915338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:03.915344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:03.915403  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:03.951982  301425 cri.go:89] found id: ""
	I0729 13:40:03.952014  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.952023  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:03.952032  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:03.952099  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:03.989751  301425 cri.go:89] found id: ""
	I0729 13:40:03.989785  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.989796  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:03.989804  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:03.989870  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:04.026934  301425 cri.go:89] found id: ""
	I0729 13:40:04.026975  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.026988  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:04.026996  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:04.027059  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:04.064135  301425 cri.go:89] found id: ""
	I0729 13:40:04.064165  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.064175  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:04.064187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:04.064256  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:04.103080  301425 cri.go:89] found id: ""
	I0729 13:40:04.103108  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.103117  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:04.103123  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:04.103172  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:04.143370  301425 cri.go:89] found id: ""
	I0729 13:40:04.143403  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.143414  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:04.143422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:04.143491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:04.179251  301425 cri.go:89] found id: ""
	I0729 13:40:04.179286  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.179298  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:04.179311  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:04.179330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:04.261058  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:04.261089  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:04.261111  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:04.342897  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:04.342935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:04.391504  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:04.391532  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:04.443064  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:04.443106  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:06.959346  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:06.974377  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:06.974444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:07.007797  301425 cri.go:89] found id: ""
	I0729 13:40:07.007834  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.007847  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:07.007856  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:07.007924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:07.042707  301425 cri.go:89] found id: ""
	I0729 13:40:07.042741  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.042749  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:07.042755  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:07.042807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:07.080150  301425 cri.go:89] found id: ""
	I0729 13:40:07.080185  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.080196  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:07.080203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:07.080268  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:07.115740  301425 cri.go:89] found id: ""
	I0729 13:40:07.115777  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.115788  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:07.115796  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:07.115888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:07.154110  301425 cri.go:89] found id: ""
	I0729 13:40:07.154141  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.154151  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:07.154158  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:07.154225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:07.190819  301425 cri.go:89] found id: ""
	I0729 13:40:07.190850  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.190858  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:07.190865  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:07.190917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:07.231530  301425 cri.go:89] found id: ""
	I0729 13:40:07.231560  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.231571  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:07.231579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:07.231643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:07.272211  301425 cri.go:89] found id: ""
	I0729 13:40:07.272240  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.272247  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:07.272257  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:07.272269  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.326673  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:07.326704  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:07.341255  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:07.341282  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:07.409850  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:07.409878  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:07.409895  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:07.493105  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:07.493169  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.033906  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:10.047938  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:10.048018  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:10.084224  301425 cri.go:89] found id: ""
	I0729 13:40:10.084251  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.084259  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:10.084265  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:10.084316  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:10.120362  301425 cri.go:89] found id: ""
	I0729 13:40:10.120398  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.120409  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:10.120417  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:10.120484  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:10.154128  301425 cri.go:89] found id: ""
	I0729 13:40:10.154160  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.154170  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:10.154178  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:10.154243  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:10.189539  301425 cri.go:89] found id: ""
	I0729 13:40:10.189574  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.189588  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:10.189596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:10.189661  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:10.228821  301425 cri.go:89] found id: ""
	I0729 13:40:10.228855  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.228867  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:10.228875  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:10.228950  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:10.274726  301425 cri.go:89] found id: ""
	I0729 13:40:10.274758  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.274769  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:10.274776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:10.274845  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:10.308910  301425 cri.go:89] found id: ""
	I0729 13:40:10.308945  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.308956  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:10.308964  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:10.309030  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:10.346008  301425 cri.go:89] found id: ""
	I0729 13:40:10.346044  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.346056  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:10.346069  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:10.346091  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:10.360541  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:10.360581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:10.433763  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:10.433788  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:10.433802  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:10.520366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:10.520418  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.561482  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:10.561512  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.114858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:13.128348  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:13.128425  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:13.165329  301425 cri.go:89] found id: ""
	I0729 13:40:13.165359  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.165370  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:13.165377  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:13.165441  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:13.200104  301425 cri.go:89] found id: ""
	I0729 13:40:13.200135  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.200148  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:13.200155  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:13.200224  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:13.238632  301425 cri.go:89] found id: ""
	I0729 13:40:13.238680  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.238688  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:13.238694  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:13.238748  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:13.270859  301425 cri.go:89] found id: ""
	I0729 13:40:13.270892  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.270901  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:13.270907  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:13.270976  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:13.308346  301425 cri.go:89] found id: ""
	I0729 13:40:13.308378  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.308386  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:13.308392  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:13.308444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:13.346286  301425 cri.go:89] found id: ""
	I0729 13:40:13.346319  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.346331  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:13.346339  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:13.346412  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:13.383699  301425 cri.go:89] found id: ""
	I0729 13:40:13.383736  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.383769  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:13.383791  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:13.383850  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:13.419958  301425 cri.go:89] found id: ""
	I0729 13:40:13.420045  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.420058  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:13.420071  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:13.420094  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.473984  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:13.474028  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:13.488376  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:13.488410  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:13.559515  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:13.559543  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:13.559560  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:13.640528  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:13.640570  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.189581  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:16.203962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:16.204052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:16.240537  301425 cri.go:89] found id: ""
	I0729 13:40:16.240572  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.240583  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:16.240591  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:16.240659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:16.277060  301425 cri.go:89] found id: ""
	I0729 13:40:16.277099  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.277112  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:16.277123  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:16.277200  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:16.313839  301425 cri.go:89] found id: ""
	I0729 13:40:16.313869  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.313878  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:16.313884  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:16.313935  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:16.351806  301425 cri.go:89] found id: ""
	I0729 13:40:16.351840  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.351850  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:16.351858  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:16.351922  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:16.387122  301425 cri.go:89] found id: ""
	I0729 13:40:16.387158  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.387169  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:16.387176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:16.387242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:16.424180  301425 cri.go:89] found id: ""
	I0729 13:40:16.424209  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.424220  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:16.424229  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:16.424292  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:16.461827  301425 cri.go:89] found id: ""
	I0729 13:40:16.461865  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.461879  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:16.461889  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:16.461946  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:16.510198  301425 cri.go:89] found id: ""
	I0729 13:40:16.510230  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.510238  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:16.510248  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:16.510264  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:16.585378  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:16.585420  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.629304  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:16.629337  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:16.682386  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:16.682434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:16.698405  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:16.698436  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:16.770281  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.270551  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:19.284543  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:19.284617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:19.325194  301425 cri.go:89] found id: ""
	I0729 13:40:19.325221  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.325231  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:19.325238  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:19.325298  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:19.362007  301425 cri.go:89] found id: ""
	I0729 13:40:19.362038  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.362058  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:19.362066  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:19.362196  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:19.401162  301425 cri.go:89] found id: ""
	I0729 13:40:19.401191  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.401202  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:19.401210  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:19.401274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:19.434652  301425 cri.go:89] found id: ""
	I0729 13:40:19.434689  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.434700  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:19.434709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:19.434774  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:19.470116  301425 cri.go:89] found id: ""
	I0729 13:40:19.470149  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.470157  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:19.470164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:19.470218  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:19.503593  301425 cri.go:89] found id: ""
	I0729 13:40:19.503621  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.503629  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:19.503635  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:19.503696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:19.546127  301425 cri.go:89] found id: ""
	I0729 13:40:19.546155  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.546164  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:19.546169  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:19.546217  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:19.584600  301425 cri.go:89] found id: ""
	I0729 13:40:19.584639  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.584650  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:19.584663  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:19.584681  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:19.599411  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:19.599446  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:19.665811  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.665836  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:19.665853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:19.747295  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:19.747339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:19.790476  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:19.790516  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.346725  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:22.361349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:22.361443  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:22.394840  301425 cri.go:89] found id: ""
	I0729 13:40:22.394870  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.394881  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:22.394889  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:22.394956  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:22.429328  301425 cri.go:89] found id: ""
	I0729 13:40:22.429356  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.429364  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:22.429370  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:22.429431  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:22.463179  301425 cri.go:89] found id: ""
	I0729 13:40:22.463206  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.463214  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:22.463220  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:22.463291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:22.497527  301425 cri.go:89] found id: ""
	I0729 13:40:22.497557  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.497565  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:22.497571  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:22.497627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:22.537607  301425 cri.go:89] found id: ""
	I0729 13:40:22.537635  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.537646  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:22.537654  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:22.537718  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:22.580658  301425 cri.go:89] found id: ""
	I0729 13:40:22.580689  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.580701  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:22.580709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:22.580775  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:22.622229  301425 cri.go:89] found id: ""
	I0729 13:40:22.622261  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.622270  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:22.622282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:22.622346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:22.660091  301425 cri.go:89] found id: ""
	I0729 13:40:22.660120  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.660129  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:22.660139  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:22.660153  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.715053  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:22.715090  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:22.728865  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:22.728898  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:22.805760  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:22.805785  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:22.805799  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:22.890915  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:22.890960  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:25.457272  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:25.471002  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:25.471088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:25.506190  301425 cri.go:89] found id: ""
	I0729 13:40:25.506226  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.506237  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:25.506244  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:25.506297  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:25.540957  301425 cri.go:89] found id: ""
	I0729 13:40:25.540991  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.541002  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:25.541011  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:25.541074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:25.578378  301425 cri.go:89] found id: ""
	I0729 13:40:25.578424  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.578440  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:25.578448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:25.578518  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:25.620930  301425 cri.go:89] found id: ""
	I0729 13:40:25.620962  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.620979  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:25.620987  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:25.621056  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:25.655558  301425 cri.go:89] found id: ""
	I0729 13:40:25.655589  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.655597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:25.655604  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:25.655670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:25.688810  301425 cri.go:89] found id: ""
	I0729 13:40:25.688845  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.688855  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:25.688863  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:25.688930  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:25.724384  301425 cri.go:89] found id: ""
	I0729 13:40:25.724416  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.724428  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:25.724435  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:25.724514  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:25.763174  301425 cri.go:89] found id: ""
	I0729 13:40:25.763200  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.763209  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:25.763219  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:25.763232  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:25.818517  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:25.818569  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:25.833939  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:25.833973  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:25.910487  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:25.910515  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:25.910537  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:25.993887  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:25.993929  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:28.536843  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:28.550097  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:28.550175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:28.592664  301425 cri.go:89] found id: ""
	I0729 13:40:28.592697  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.592709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:28.592716  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:28.592788  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:28.638299  301425 cri.go:89] found id: ""
	I0729 13:40:28.638329  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.638337  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:28.638343  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:28.638395  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:28.682410  301425 cri.go:89] found id: ""
	I0729 13:40:28.682437  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.682446  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:28.682452  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:28.682511  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:28.719402  301425 cri.go:89] found id: ""
	I0729 13:40:28.719430  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.719438  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:28.719444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:28.719504  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:28.767515  301425 cri.go:89] found id: ""
	I0729 13:40:28.767547  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.767559  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:28.767568  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:28.767633  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:28.811600  301425 cri.go:89] found id: ""
	I0729 13:40:28.811632  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.811644  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:28.811652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:28.811727  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:28.853364  301425 cri.go:89] found id: ""
	I0729 13:40:28.853397  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.853407  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:28.853414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:28.853486  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:28.890981  301425 cri.go:89] found id: ""
	I0729 13:40:28.891013  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.891024  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:28.891035  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:28.891050  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:28.944174  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:28.944213  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:28.957724  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:28.957755  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:29.026457  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:29.026479  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:29.026497  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:29.105366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:29.105415  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:31.649374  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:31.663432  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:31.663512  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:31.702047  301425 cri.go:89] found id: ""
	I0729 13:40:31.702080  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.702088  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:31.702098  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:31.702162  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:31.738484  301425 cri.go:89] found id: ""
	I0729 13:40:31.738510  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.738518  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:31.738524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:31.738583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:31.774214  301425 cri.go:89] found id: ""
	I0729 13:40:31.774249  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.774261  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:31.774270  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:31.774339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:31.810263  301425 cri.go:89] found id: ""
	I0729 13:40:31.810293  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.810302  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:31.810307  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:31.810369  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:31.848124  301425 cri.go:89] found id: ""
	I0729 13:40:31.848153  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.848160  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:31.848167  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:31.848234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:31.885531  301425 cri.go:89] found id: ""
	I0729 13:40:31.885561  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.885571  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:31.885580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:31.885650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:31.923904  301425 cri.go:89] found id: ""
	I0729 13:40:31.923939  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.923952  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:31.923959  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:31.924029  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:31.957165  301425 cri.go:89] found id: ""
	I0729 13:40:31.957202  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.957213  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:31.957228  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:31.957248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:32.039221  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:32.039262  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.078191  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:32.078229  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:32.131871  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:32.131922  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:32.146676  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:32.146706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:32.223849  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:34.724927  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:34.739029  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:34.739113  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:34.774627  301425 cri.go:89] found id: ""
	I0729 13:40:34.774660  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.774669  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:34.774675  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:34.774743  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:34.809840  301425 cri.go:89] found id: ""
	I0729 13:40:34.809872  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.809882  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:34.809887  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:34.809940  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:34.847530  301425 cri.go:89] found id: ""
	I0729 13:40:34.847561  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.847572  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:34.847580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:34.847648  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:34.881828  301425 cri.go:89] found id: ""
	I0729 13:40:34.881856  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.881870  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:34.881876  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:34.881937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:34.918903  301425 cri.go:89] found id: ""
	I0729 13:40:34.918937  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.918949  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:34.918956  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:34.919015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:34.954714  301425 cri.go:89] found id: ""
	I0729 13:40:34.954749  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.954761  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:34.954770  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:34.954825  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:34.993433  301425 cri.go:89] found id: ""
	I0729 13:40:34.993463  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.993472  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:34.993478  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:34.993531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:35.033830  301425 cri.go:89] found id: ""
	I0729 13:40:35.033859  301425 logs.go:276] 0 containers: []
	W0729 13:40:35.033874  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:35.033884  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:35.033900  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:35.084546  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:35.084595  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:35.098807  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:35.098845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:35.182636  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:35.182662  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:35.182674  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:35.262767  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:35.262808  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:37.802033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:37.815633  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:37.815697  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:37.857522  301425 cri.go:89] found id: ""
	I0729 13:40:37.857552  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.857563  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:37.857571  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:37.857627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:37.897527  301425 cri.go:89] found id: ""
	I0729 13:40:37.897564  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.897575  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:37.897583  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:37.897649  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.937135  301425 cri.go:89] found id: ""
	I0729 13:40:37.937167  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.937176  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:37.937189  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:37.937255  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:37.972699  301425 cri.go:89] found id: ""
	I0729 13:40:37.972734  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.972751  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:37.972761  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:37.972933  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:38.012702  301425 cri.go:89] found id: ""
	I0729 13:40:38.012732  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.012740  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:38.012747  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:38.012832  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:38.050228  301425 cri.go:89] found id: ""
	I0729 13:40:38.050260  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.050268  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:38.050275  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:38.050329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:38.084665  301425 cri.go:89] found id: ""
	I0729 13:40:38.084693  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.084707  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:38.084715  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:38.084780  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:38.119155  301425 cri.go:89] found id: ""
	I0729 13:40:38.119200  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.119211  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:38.119222  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:38.119236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:38.170934  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:38.170968  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:38.185298  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:38.185329  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:38.256118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:38.256149  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:38.256166  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:38.337090  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:38.337127  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:40.876177  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:40.889580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:40.889655  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:40.922971  301425 cri.go:89] found id: ""
	I0729 13:40:40.923002  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.923010  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:40.923016  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:40.923074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:40.955840  301425 cri.go:89] found id: ""
	I0729 13:40:40.955872  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.955884  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:40.955891  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:40.955952  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:40.993258  301425 cri.go:89] found id: ""
	I0729 13:40:40.993290  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.993298  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:40.993305  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:40.993357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:41.026370  301425 cri.go:89] found id: ""
	I0729 13:40:41.026398  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.026409  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:41.026416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:41.026473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:41.060538  301425 cri.go:89] found id: ""
	I0729 13:40:41.060565  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.060574  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:41.060579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:41.060630  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:41.105074  301425 cri.go:89] found id: ""
	I0729 13:40:41.105108  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.105118  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:41.105126  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:41.105193  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:41.138254  301425 cri.go:89] found id: ""
	I0729 13:40:41.138280  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.138288  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:41.138294  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:41.138342  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:41.171432  301425 cri.go:89] found id: ""
	I0729 13:40:41.171458  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.171466  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:41.171475  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:41.171487  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:41.184703  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:41.184736  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:41.265356  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:41.265392  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:41.265409  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:41.345939  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:41.345979  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:41.388819  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:41.388852  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:43.940388  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:43.955448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:43.955515  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:43.998457  301425 cri.go:89] found id: ""
	I0729 13:40:43.998494  301425 logs.go:276] 0 containers: []
	W0729 13:40:43.998506  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:43.998515  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:43.998584  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:44.038142  301425 cri.go:89] found id: ""
	I0729 13:40:44.038173  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.038185  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:44.038193  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:44.038260  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:44.077270  301425 cri.go:89] found id: ""
	I0729 13:40:44.077302  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.077313  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:44.077321  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:44.077391  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:44.117612  301425 cri.go:89] found id: ""
	I0729 13:40:44.117641  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.117661  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:44.117681  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:44.117749  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:44.152564  301425 cri.go:89] found id: ""
	I0729 13:40:44.152603  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.152615  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:44.152623  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:44.152683  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:44.188245  301425 cri.go:89] found id: ""
	I0729 13:40:44.188276  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.188288  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:44.188296  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:44.188355  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:44.224947  301425 cri.go:89] found id: ""
	I0729 13:40:44.224975  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.224983  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:44.224989  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:44.225037  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:44.264830  301425 cri.go:89] found id: ""
	I0729 13:40:44.264860  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.264867  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:44.264877  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:44.264893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:44.343145  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:44.343182  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:44.384619  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:44.384650  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:44.438195  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:44.438237  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:44.452115  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:44.452152  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:44.526586  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.027726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:47.041174  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:47.041242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:47.079265  301425 cri.go:89] found id: ""
	I0729 13:40:47.079295  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.079304  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:47.079313  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:47.079380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:47.119775  301425 cri.go:89] found id: ""
	I0729 13:40:47.119807  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.119820  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:47.119828  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:47.119904  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:47.155381  301425 cri.go:89] found id: ""
	I0729 13:40:47.155415  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.155426  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:47.155434  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:47.155490  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:47.195071  301425 cri.go:89] found id: ""
	I0729 13:40:47.195103  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.195111  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:47.195117  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:47.195167  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:47.229487  301425 cri.go:89] found id: ""
	I0729 13:40:47.229519  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.229531  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:47.229539  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:47.229611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:47.266159  301425 cri.go:89] found id: ""
	I0729 13:40:47.266190  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.266201  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:47.266209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:47.266269  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:47.300813  301425 cri.go:89] found id: ""
	I0729 13:40:47.300845  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.300854  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:47.300860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:47.300916  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:47.340378  301425 cri.go:89] found id: ""
	I0729 13:40:47.340412  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.340432  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:47.340444  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:47.340464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:47.395403  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:47.395444  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:47.409505  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:47.409539  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:47.481327  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.481349  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:47.481365  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:47.560129  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:47.560172  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.105832  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:50.121192  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:50.121264  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:50.160217  301425 cri.go:89] found id: ""
	I0729 13:40:50.160247  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.160256  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:50.160262  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:50.160313  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:50.199952  301425 cri.go:89] found id: ""
	I0729 13:40:50.199986  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.199998  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:50.200005  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:50.200065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:50.240036  301425 cri.go:89] found id: ""
	I0729 13:40:50.240069  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.240076  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:50.240083  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:50.240134  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:50.279761  301425 cri.go:89] found id: ""
	I0729 13:40:50.279788  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.279796  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:50.279802  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:50.279852  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:50.320324  301425 cri.go:89] found id: ""
	I0729 13:40:50.320350  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.320358  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:50.320364  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:50.320423  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:50.356385  301425 cri.go:89] found id: ""
	I0729 13:40:50.356413  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.356421  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:50.356427  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:50.356482  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:50.396866  301425 cri.go:89] found id: ""
	I0729 13:40:50.396900  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.396912  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:50.396919  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:50.397008  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:50.434778  301425 cri.go:89] found id: ""
	I0729 13:40:50.434812  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.434823  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:50.434836  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:50.434853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:50.447746  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:50.447776  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:50.523750  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:50.523772  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:50.523787  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:50.604206  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:50.604255  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.647414  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:50.647449  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.201653  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:53.215745  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:53.215814  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:53.250482  301425 cri.go:89] found id: ""
	I0729 13:40:53.250508  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.250516  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:53.250522  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:53.250583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:53.285956  301425 cri.go:89] found id: ""
	I0729 13:40:53.285988  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.285996  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:53.286002  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:53.286055  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:53.320248  301425 cri.go:89] found id: ""
	I0729 13:40:53.320281  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.320292  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:53.320300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:53.320364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:53.355155  301425 cri.go:89] found id: ""
	I0729 13:40:53.355188  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.355200  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:53.355209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:53.355271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:53.389519  301425 cri.go:89] found id: ""
	I0729 13:40:53.389549  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.389557  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:53.389564  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:53.389620  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:53.424391  301425 cri.go:89] found id: ""
	I0729 13:40:53.424419  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.424427  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:53.424433  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:53.424492  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:53.463297  301425 cri.go:89] found id: ""
	I0729 13:40:53.463331  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.463342  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:53.463350  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:53.463433  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:53.497565  301425 cri.go:89] found id: ""
	I0729 13:40:53.497593  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.497601  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:53.497610  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:53.497622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.548906  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:53.548948  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:53.562789  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:53.562823  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:53.635656  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:53.635679  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:53.635693  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:53.715973  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:53.716024  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:56.258726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:56.273826  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:56.273905  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:56.310881  301425 cri.go:89] found id: ""
	I0729 13:40:56.310927  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.310936  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:56.310944  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:56.310999  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:56.350104  301425 cri.go:89] found id: ""
	I0729 13:40:56.350139  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.350151  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:56.350158  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:56.350221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:56.385100  301425 cri.go:89] found id: ""
	I0729 13:40:56.385136  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.385145  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:56.385151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:56.385234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:56.421904  301425 cri.go:89] found id: ""
	I0729 13:40:56.421941  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.421953  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:56.421961  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:56.422025  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:56.457366  301425 cri.go:89] found id: ""
	I0729 13:40:56.457403  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.457414  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:56.457422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:56.457491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:56.496700  301425 cri.go:89] found id: ""
	I0729 13:40:56.496732  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.496746  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:56.496755  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:56.496844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:56.532011  301425 cri.go:89] found id: ""
	I0729 13:40:56.532039  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.532047  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:56.532053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:56.532102  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:56.567511  301425 cri.go:89] found id: ""
	I0729 13:40:56.567543  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.567554  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:56.567566  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:56.567581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:56.615875  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:56.615914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:56.629818  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:56.629862  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:56.703255  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:56.703284  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:56.703298  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:56.786466  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:56.786508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:59.328670  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:59.342993  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:59.343061  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:59.378267  301425 cri.go:89] found id: ""
	I0729 13:40:59.378301  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.378313  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:59.378321  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:59.378392  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:59.415637  301425 cri.go:89] found id: ""
	I0729 13:40:59.415669  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.415680  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:59.415687  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:59.415759  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:59.451170  301425 cri.go:89] found id: ""
	I0729 13:40:59.451204  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.451212  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:59.451219  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:59.451275  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:59.485914  301425 cri.go:89] found id: ""
	I0729 13:40:59.485948  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.485960  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:59.485975  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:59.486052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:59.523168  301425 cri.go:89] found id: ""
	I0729 13:40:59.523198  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.523208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:59.523216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:59.523274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:59.557711  301425 cri.go:89] found id: ""
	I0729 13:40:59.557746  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.557758  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:59.557766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:59.557826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:59.593387  301425 cri.go:89] found id: ""
	I0729 13:40:59.593421  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.593434  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:59.593442  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:59.593506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:59.627521  301425 cri.go:89] found id: ""
	I0729 13:40:59.627555  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.627566  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:59.627578  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:59.627597  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:59.677497  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:59.677538  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:59.692116  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:59.692150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:59.759344  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:59.759369  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:59.759382  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:59.840380  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:59.840423  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.380718  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:02.394436  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:02.394497  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:02.433283  301425 cri.go:89] found id: ""
	I0729 13:41:02.433313  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.433323  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:02.433332  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:02.433393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:02.467206  301425 cri.go:89] found id: ""
	I0729 13:41:02.467232  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.467241  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:02.467247  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:02.467300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:02.502743  301425 cri.go:89] found id: ""
	I0729 13:41:02.502774  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.502783  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:02.502790  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:02.502844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:02.536415  301425 cri.go:89] found id: ""
	I0729 13:41:02.536449  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.536462  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:02.536470  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:02.536527  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:02.570572  301425 cri.go:89] found id: ""
	I0729 13:41:02.570610  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.570621  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:02.570629  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:02.570702  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:02.606251  301425 cri.go:89] found id: ""
	I0729 13:41:02.606277  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.606285  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:02.606292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:02.606345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:02.644637  301425 cri.go:89] found id: ""
	I0729 13:41:02.644664  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.644675  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:02.644683  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:02.644750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:02.679493  301425 cri.go:89] found id: ""
	I0729 13:41:02.679519  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.679527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:02.679537  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:02.679553  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.734865  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:02.734896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:02.787929  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:02.787962  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:02.801317  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:02.801344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:02.867838  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:02.867862  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:02.867877  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.451323  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:05.465262  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:05.465338  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:05.499797  301425 cri.go:89] found id: ""
	I0729 13:41:05.499827  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.499837  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:05.499845  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:05.499912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:05.534363  301425 cri.go:89] found id: ""
	I0729 13:41:05.534403  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.534416  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:05.534424  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:05.534483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:05.571366  301425 cri.go:89] found id: ""
	I0729 13:41:05.571397  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.571408  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:05.571416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:05.571481  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:05.611301  301425 cri.go:89] found id: ""
	I0729 13:41:05.611335  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.611346  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:05.611355  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:05.611422  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:05.650698  301425 cri.go:89] found id: ""
	I0729 13:41:05.650738  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.650750  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:05.650758  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:05.650823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:05.686166  301425 cri.go:89] found id: ""
	I0729 13:41:05.686204  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.686216  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:05.686225  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:05.686279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:05.724567  301425 cri.go:89] found id: ""
	I0729 13:41:05.724604  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.724616  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:05.724628  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:05.724691  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:05.760401  301425 cri.go:89] found id: ""
	I0729 13:41:05.760430  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.760438  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:05.760448  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:05.760464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:05.811654  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:05.811698  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:05.827189  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:05.827226  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:05.899612  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:05.899636  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:05.899654  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.982384  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:05.982425  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.527609  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:08.542024  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:08.542086  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:08.576313  301425 cri.go:89] found id: ""
	I0729 13:41:08.576340  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.576348  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:08.576354  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:08.576406  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:08.609996  301425 cri.go:89] found id: ""
	I0729 13:41:08.610027  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.610038  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:08.610045  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:08.610111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:08.643722  301425 cri.go:89] found id: ""
	I0729 13:41:08.643750  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.643758  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:08.643765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:08.643815  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:08.679331  301425 cri.go:89] found id: ""
	I0729 13:41:08.679367  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.679378  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:08.679388  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:08.679459  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:08.718348  301425 cri.go:89] found id: ""
	I0729 13:41:08.718376  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.718384  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:08.718390  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:08.718444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:08.758086  301425 cri.go:89] found id: ""
	I0729 13:41:08.758128  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.758140  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:08.758150  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:08.758225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:08.794304  301425 cri.go:89] found id: ""
	I0729 13:41:08.794333  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.794345  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:08.794354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:08.794415  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:08.835448  301425 cri.go:89] found id: ""
	I0729 13:41:08.835477  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.835486  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:08.835495  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:08.835508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:08.923886  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:08.923931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.963921  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:08.963957  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:09.013852  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:09.013893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:09.027838  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:09.027872  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:09.097864  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.598762  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:11.612789  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:11.612903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:11.650029  301425 cri.go:89] found id: ""
	I0729 13:41:11.650063  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.650074  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:11.650084  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:11.650152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:11.687479  301425 cri.go:89] found id: ""
	I0729 13:41:11.687510  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.687520  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:11.687527  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:11.687593  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:11.723788  301425 cri.go:89] found id: ""
	I0729 13:41:11.723816  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.723824  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:11.723830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:11.723878  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:11.760304  301425 cri.go:89] found id: ""
	I0729 13:41:11.760341  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.760353  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:11.760361  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:11.760429  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:11.794175  301425 cri.go:89] found id: ""
	I0729 13:41:11.794202  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.794210  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:11.794216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:11.794276  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:11.830653  301425 cri.go:89] found id: ""
	I0729 13:41:11.830679  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.830689  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:11.830697  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:11.830755  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:11.869360  301425 cri.go:89] found id: ""
	I0729 13:41:11.869391  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.869403  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:11.869410  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:11.869473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:11.904164  301425 cri.go:89] found id: ""
	I0729 13:41:11.904195  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.904206  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:11.904218  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:11.904236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:11.979031  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.979054  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:11.979069  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:12.064215  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:12.064254  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:12.101854  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:12.101896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:12.152327  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:12.152362  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:14.668032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:14.683118  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:14.683182  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:14.722574  301425 cri.go:89] found id: ""
	I0729 13:41:14.722602  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.722612  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:14.722619  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:14.722686  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:14.759047  301425 cri.go:89] found id: ""
	I0729 13:41:14.759084  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.759094  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:14.759099  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:14.759156  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:14.794363  301425 cri.go:89] found id: ""
	I0729 13:41:14.794400  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.794411  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:14.794418  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:14.794488  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:14.831542  301425 cri.go:89] found id: ""
	I0729 13:41:14.831579  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.831586  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:14.831592  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:14.831650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:14.878710  301425 cri.go:89] found id: ""
	I0729 13:41:14.878745  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.878758  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:14.878765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:14.878824  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:14.937804  301425 cri.go:89] found id: ""
	I0729 13:41:14.937837  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.937847  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:14.937856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:14.937923  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:14.985616  301425 cri.go:89] found id: ""
	I0729 13:41:14.985649  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.985658  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:14.985665  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:14.985737  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:15.023210  301425 cri.go:89] found id: ""
	I0729 13:41:15.023248  301425 logs.go:276] 0 containers: []
	W0729 13:41:15.023261  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:15.023273  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:15.023288  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:15.072549  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:15.072587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:15.086624  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:15.086653  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:15.155391  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:15.155412  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:15.155426  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:15.237480  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:15.237535  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:17.779568  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:17.794163  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:17.794225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:17.831416  301425 cri.go:89] found id: ""
	I0729 13:41:17.831446  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.831456  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:17.831463  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:17.831519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:17.868713  301425 cri.go:89] found id: ""
	I0729 13:41:17.868740  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.868752  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:17.868758  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:17.868834  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:17.913159  301425 cri.go:89] found id: ""
	I0729 13:41:17.913200  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.913211  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:17.913221  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:17.913291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:17.947528  301425 cri.go:89] found id: ""
	I0729 13:41:17.947559  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.947567  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:17.947573  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:17.947693  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:17.982280  301425 cri.go:89] found id: ""
	I0729 13:41:17.982314  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.982323  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:17.982330  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:17.982407  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:18.023729  301425 cri.go:89] found id: ""
	I0729 13:41:18.023767  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.023776  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:18.023783  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:18.023847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:18.061594  301425 cri.go:89] found id: ""
	I0729 13:41:18.061629  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.061637  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:18.061642  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:18.061694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:18.095705  301425 cri.go:89] found id: ""
	I0729 13:41:18.095735  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.095745  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:18.095758  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:18.095778  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:18.175843  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:18.175879  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:18.222979  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:18.223015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:18.277265  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:18.277308  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:18.291002  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:18.291037  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:18.373425  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:20.873958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:20.888091  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:20.888153  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:20.925850  301425 cri.go:89] found id: ""
	I0729 13:41:20.925886  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.925894  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:20.925901  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:20.925955  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:20.962725  301425 cri.go:89] found id: ""
	I0729 13:41:20.962762  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.962774  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:20.962782  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:20.962847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:20.998741  301425 cri.go:89] found id: ""
	I0729 13:41:20.998778  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.998787  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:20.998794  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:20.998842  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:21.036370  301425 cri.go:89] found id: ""
	I0729 13:41:21.036401  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.036410  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:21.036417  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:21.036483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:21.071560  301425 cri.go:89] found id: ""
	I0729 13:41:21.071588  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.071597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:21.071605  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:21.071670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:21.106778  301425 cri.go:89] found id: ""
	I0729 13:41:21.106810  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.106822  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:21.106830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:21.106890  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:21.139901  301425 cri.go:89] found id: ""
	I0729 13:41:21.139926  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.139934  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:21.139940  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:21.140001  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:21.173281  301425 cri.go:89] found id: ""
	I0729 13:41:21.173312  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.173320  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:21.173330  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:21.173344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:21.225055  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:21.225095  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:21.239780  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:21.239864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:21.313460  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:21.313486  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:21.313504  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:21.398557  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:21.398599  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:23.937873  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:23.951595  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:23.951653  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:23.987177  301425 cri.go:89] found id: ""
	I0729 13:41:23.987208  301425 logs.go:276] 0 containers: []
	W0729 13:41:23.987217  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:23.987225  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:23.987324  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:24.030197  301425 cri.go:89] found id: ""
	I0729 13:41:24.030251  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.030264  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:24.030272  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:24.030339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:24.068031  301425 cri.go:89] found id: ""
	I0729 13:41:24.068061  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.068074  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:24.068081  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:24.068154  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:24.107192  301425 cri.go:89] found id: ""
	I0729 13:41:24.107221  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.107232  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:24.107239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:24.107304  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:24.143154  301425 cri.go:89] found id: ""
	I0729 13:41:24.143182  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.143190  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:24.143196  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:24.143248  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:24.181268  301425 cri.go:89] found id: ""
	I0729 13:41:24.181296  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.181304  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:24.181311  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:24.181370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:24.215248  301425 cri.go:89] found id: ""
	I0729 13:41:24.215284  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.215293  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:24.215299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:24.215363  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:24.250796  301425 cri.go:89] found id: ""
	I0729 13:41:24.250822  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.250831  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:24.250841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:24.250853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:24.305841  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:24.305883  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:24.320182  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:24.320214  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:24.389667  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:24.389690  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:24.389707  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:24.471435  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:24.471479  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:27.014508  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:27.029318  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:27.029382  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:27.064115  301425 cri.go:89] found id: ""
	I0729 13:41:27.064150  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.064161  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:27.064169  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:27.064250  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:27.099081  301425 cri.go:89] found id: ""
	I0729 13:41:27.099110  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.099123  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:27.099131  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:27.099197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:27.132475  301425 cri.go:89] found id: ""
	I0729 13:41:27.132506  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.132518  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:27.132527  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:27.132595  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:27.168924  301425 cri.go:89] found id: ""
	I0729 13:41:27.168948  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.168956  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:27.168962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:27.169015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:27.204052  301425 cri.go:89] found id: ""
	I0729 13:41:27.204082  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.204094  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:27.204109  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:27.204170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:27.238355  301425 cri.go:89] found id: ""
	I0729 13:41:27.238383  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.238391  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:27.238397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:27.238496  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:27.276104  301425 cri.go:89] found id: ""
	I0729 13:41:27.276139  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.276150  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:27.276157  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:27.276222  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:27.308612  301425 cri.go:89] found id: ""
	I0729 13:41:27.308643  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.308654  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:27.308667  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:27.308683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:27.362472  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:27.362511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:27.376349  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:27.376383  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:27.458450  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:27.458472  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:27.458486  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:27.536405  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:27.536445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:30.076285  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:30.091308  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:30.091386  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:30.138335  301425 cri.go:89] found id: ""
	I0729 13:41:30.138369  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.138381  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:30.138389  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:30.138454  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:30.176395  301425 cri.go:89] found id: ""
	I0729 13:41:30.176425  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.176435  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:30.176443  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:30.176495  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:30.214990  301425 cri.go:89] found id: ""
	I0729 13:41:30.215027  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.215035  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:30.215041  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:30.215090  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:30.252051  301425 cri.go:89] found id: ""
	I0729 13:41:30.252080  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.252088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:30.252094  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:30.252155  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:30.287210  301425 cri.go:89] found id: ""
	I0729 13:41:30.287240  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.287249  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:30.287254  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:30.287337  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:30.322813  301425 cri.go:89] found id: ""
	I0729 13:41:30.322842  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.322851  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:30.322857  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:30.322924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:30.358697  301425 cri.go:89] found id: ""
	I0729 13:41:30.358730  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.358738  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:30.358744  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:30.358804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:30.394252  301425 cri.go:89] found id: ""
	I0729 13:41:30.394283  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.394294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:30.394305  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:30.394321  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:30.446777  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:30.446820  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:30.461564  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:30.461605  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:30.537918  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:30.537942  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:30.537958  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:30.613821  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:30.613865  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.154081  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:33.168252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:33.168353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:33.205675  301425 cri.go:89] found id: ""
	I0729 13:41:33.205708  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.205719  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:33.205727  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:33.205799  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:33.240556  301425 cri.go:89] found id: ""
	I0729 13:41:33.240582  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.240590  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:33.240596  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:33.240644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:33.276662  301425 cri.go:89] found id: ""
	I0729 13:41:33.276690  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.276698  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:33.276704  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:33.276773  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:33.318631  301425 cri.go:89] found id: ""
	I0729 13:41:33.318667  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.318677  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:33.318685  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:33.318762  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:33.354372  301425 cri.go:89] found id: ""
	I0729 13:41:33.354403  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.354412  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:33.354421  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:33.354475  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:33.389309  301425 cri.go:89] found id: ""
	I0729 13:41:33.389337  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.389346  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:33.389352  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:33.389404  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:33.423689  301425 cri.go:89] found id: ""
	I0729 13:41:33.423732  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.423745  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:33.423753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:33.423823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:33.457556  301425 cri.go:89] found id: ""
	I0729 13:41:33.457593  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.457605  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:33.457618  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:33.457634  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:33.534377  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:33.534416  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.579646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:33.579689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:33.629784  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:33.629819  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:33.643878  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:33.643912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:33.716446  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.216598  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:36.229904  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:36.230003  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:36.263721  301425 cri.go:89] found id: ""
	I0729 13:41:36.263752  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.263771  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:36.263786  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:36.263838  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:36.297900  301425 cri.go:89] found id: ""
	I0729 13:41:36.297932  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.297950  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:36.297958  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:36.298023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:36.338037  301425 cri.go:89] found id: ""
	I0729 13:41:36.338064  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.338072  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:36.338078  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:36.338125  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:36.375334  301425 cri.go:89] found id: ""
	I0729 13:41:36.375362  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.375370  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:36.375375  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:36.375426  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:36.410760  301425 cri.go:89] found id: ""
	I0729 13:41:36.410794  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.410805  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:36.410813  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:36.410888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:36.445247  301425 cri.go:89] found id: ""
	I0729 13:41:36.445280  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.445291  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:36.445300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:36.445364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:36.487183  301425 cri.go:89] found id: ""
	I0729 13:41:36.487214  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.487221  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:36.487228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:36.487301  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:36.522407  301425 cri.go:89] found id: ""
	I0729 13:41:36.522433  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.522442  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:36.522453  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:36.522468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:36.537163  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:36.537197  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:36.608334  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.608361  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:36.608376  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:36.689026  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:36.689074  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:36.728580  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:36.728618  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.279605  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:39.293259  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:39.293320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:39.329070  301425 cri.go:89] found id: ""
	I0729 13:41:39.329095  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.329103  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:39.329109  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:39.329160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:39.362992  301425 cri.go:89] found id: ""
	I0729 13:41:39.363023  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.363032  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:39.363038  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:39.363100  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:39.403094  301425 cri.go:89] found id: ""
	I0729 13:41:39.403128  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.403140  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:39.403147  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:39.403201  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:39.435761  301425 cri.go:89] found id: ""
	I0729 13:41:39.435795  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.435806  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:39.435814  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:39.435881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:39.468299  301425 cri.go:89] found id: ""
	I0729 13:41:39.468332  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.468341  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:39.468349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:39.468417  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:39.505114  301425 cri.go:89] found id: ""
	I0729 13:41:39.505149  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.505162  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:39.505172  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:39.505234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:39.536942  301425 cri.go:89] found id: ""
	I0729 13:41:39.536975  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.536986  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:39.536994  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:39.537064  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:39.577394  301425 cri.go:89] found id: ""
	I0729 13:41:39.577427  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.577439  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:39.577451  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:39.577468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.631143  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:39.631184  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:39.645020  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:39.645047  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:39.718256  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:39.718283  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:39.718297  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:39.801990  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:39.802036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
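
When every component probe comes back empty, the cycle falls back to gathering diagnostics. A minimal sketch of that fallback, reusing the exact shell pipelines shown in the log but executed locally for illustration (the real runs go through ssh_runner inside the VM):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Each log source maps to one shell pipeline, copied from the log above.
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			fmt.Printf("=== %s ===\n", s.name)
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Print(string(out))
			if err != nil {
				// Keep going: a failed source should not abort log collection.
				fmt.Printf("(%s failed: %v)\n", s.name, err)
			}
		}
	}
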
	I0729 13:41:42.347066  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:42.359902  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:42.359983  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:42.395494  301425 cri.go:89] found id: ""
	I0729 13:41:42.395529  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.395540  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:42.395548  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:42.395611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:42.429305  301425 cri.go:89] found id: ""
	I0729 13:41:42.429334  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.429343  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:42.429350  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:42.429401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:42.466902  301425 cri.go:89] found id: ""
	I0729 13:41:42.466931  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.466942  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:42.466949  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:42.467017  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:42.504582  301425 cri.go:89] found id: ""
	I0729 13:41:42.504618  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.504628  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:42.504652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:42.504717  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:42.539649  301425 cri.go:89] found id: ""
	I0729 13:41:42.539676  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.539686  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:42.539695  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:42.539758  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:42.579209  301425 cri.go:89] found id: ""
	I0729 13:41:42.579238  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.579249  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:42.579257  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:42.579320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:42.614832  301425 cri.go:89] found id: ""
	I0729 13:41:42.614861  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.614869  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:42.614874  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:42.614925  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:42.651837  301425 cri.go:89] found id: ""
	I0729 13:41:42.651865  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.651873  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:42.651883  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:42.651899  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:42.707149  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:42.707190  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:42.720990  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:42.721043  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:42.789818  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:42.789849  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:42.789867  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:42.871880  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:42.871934  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.416172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:45.428923  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:45.428994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:45.466667  301425 cri.go:89] found id: ""
	I0729 13:41:45.466699  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.466710  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:45.466717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:45.466783  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:45.501779  301425 cri.go:89] found id: ""
	I0729 13:41:45.501813  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.501825  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:45.501832  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:45.501896  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:45.537507  301425 cri.go:89] found id: ""
	I0729 13:41:45.537537  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.537547  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:45.537554  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:45.537619  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:45.575430  301425 cri.go:89] found id: ""
	I0729 13:41:45.575460  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.575467  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:45.575474  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:45.575523  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:45.613009  301425 cri.go:89] found id: ""
	I0729 13:41:45.613038  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.613047  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:45.613053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:45.613103  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:45.650734  301425 cri.go:89] found id: ""
	I0729 13:41:45.650767  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.650778  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:45.650786  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:45.650853  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:45.684301  301425 cri.go:89] found id: ""
	I0729 13:41:45.684332  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.684341  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:45.684349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:45.684416  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:45.719861  301425 cri.go:89] found id: ""
	I0729 13:41:45.719901  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.719911  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:45.719921  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:45.719936  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:45.800422  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:45.800464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.842460  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:45.842493  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:45.897388  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:45.897430  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:45.911554  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:45.911587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:45.984435  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
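
The "describe nodes" step is the only gathering step that errors: with no kube-apiserver container running, kubectl cannot reach localhost:8443 and exits with status 1. A minimal sketch of running that same command and recording the failure instead of aborting, with the binary and kubeconfig paths taken from the log (it only makes sense inside the VM, where those paths exist):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr

		if err := cmd.Run(); err != nil {
			// kubectl prints the "connection to the server localhost:8443 was refused"
			// message on stderr; record it alongside the exit status and move on.
			fmt.Printf("failed describe nodes: %v\nstderr: %s\n", err, stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}
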
	I0729 13:41:48.485014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:48.498038  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:48.498110  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:48.534248  301425 cri.go:89] found id: ""
	I0729 13:41:48.534280  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.534291  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:48.534299  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:48.534362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:48.572411  301425 cri.go:89] found id: ""
	I0729 13:41:48.572445  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.572457  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:48.572465  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:48.572524  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:48.612345  301425 cri.go:89] found id: ""
	I0729 13:41:48.612373  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.612381  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:48.612387  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:48.612450  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:48.650334  301425 cri.go:89] found id: ""
	I0729 13:41:48.650385  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.650395  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:48.650401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:48.650466  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:48.687460  301425 cri.go:89] found id: ""
	I0729 13:41:48.687490  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.687501  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:48.687508  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:48.687572  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:48.735028  301425 cri.go:89] found id: ""
	I0729 13:41:48.735064  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.735077  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:48.735085  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:48.735142  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:48.771175  301425 cri.go:89] found id: ""
	I0729 13:41:48.771209  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.771220  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:48.771228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:48.771300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:48.808267  301425 cri.go:89] found id: ""
	I0729 13:41:48.808295  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.808304  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:48.808314  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:48.808328  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:48.850520  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:48.850557  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:48.902563  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:48.902612  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:48.919082  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:48.919114  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:48.999185  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.999213  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:48.999241  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.579922  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:51.593149  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:51.593213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:51.626302  301425 cri.go:89] found id: ""
	I0729 13:41:51.626330  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.626338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:51.626344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:51.626393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:51.659551  301425 cri.go:89] found id: ""
	I0729 13:41:51.659578  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.659586  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:51.659592  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:51.659642  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:51.696842  301425 cri.go:89] found id: ""
	I0729 13:41:51.696868  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.696876  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:51.696882  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:51.696937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:51.737209  301425 cri.go:89] found id: ""
	I0729 13:41:51.737237  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.737246  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:51.737253  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:51.737317  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:51.772782  301425 cri.go:89] found id: ""
	I0729 13:41:51.772829  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.772842  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:51.772850  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:51.772921  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:51.806649  301425 cri.go:89] found id: ""
	I0729 13:41:51.806679  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.806690  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:51.806698  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:51.806771  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:51.848950  301425 cri.go:89] found id: ""
	I0729 13:41:51.848978  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.848989  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:51.848997  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:51.849065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:51.884875  301425 cri.go:89] found id: ""
	I0729 13:41:51.884902  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.884910  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:51.884920  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:51.884932  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.964282  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:51.964322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:52.004218  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:52.004251  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:52.056230  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:52.056266  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.069591  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:52.069622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:52.142552  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:54.643154  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:54.657199  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:54.657259  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:54.694124  301425 cri.go:89] found id: ""
	I0729 13:41:54.694152  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.694159  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:54.694165  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:54.694221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:54.732072  301425 cri.go:89] found id: ""
	I0729 13:41:54.732109  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.732119  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:54.732127  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:54.732194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:54.768257  301425 cri.go:89] found id: ""
	I0729 13:41:54.768294  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.768306  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:54.768314  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:54.768383  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:54.807596  301425 cri.go:89] found id: ""
	I0729 13:41:54.807631  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.807643  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:54.807651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:54.807716  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:54.845107  301425 cri.go:89] found id: ""
	I0729 13:41:54.845134  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.845142  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:54.845148  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:54.845197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:54.880627  301425 cri.go:89] found id: ""
	I0729 13:41:54.880655  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.880667  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:54.880675  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:54.880750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:54.918122  301425 cri.go:89] found id: ""
	I0729 13:41:54.918151  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.918159  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:54.918165  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:54.918219  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:54.956943  301425 cri.go:89] found id: ""
	I0729 13:41:54.956986  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.956999  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:54.957022  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:54.957036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:55.032512  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:55.032547  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:55.032564  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:55.116653  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:55.116699  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:55.177030  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:55.177059  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:55.238789  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:55.238831  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:57.753504  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:57.766354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:57.766436  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:57.802691  301425 cri.go:89] found id: ""
	I0729 13:41:57.802728  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.802740  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:57.802746  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:57.802807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:57.839800  301425 cri.go:89] found id: ""
	I0729 13:41:57.839823  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.839830  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:57.839846  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:57.839902  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:57.881592  301425 cri.go:89] found id: ""
	I0729 13:41:57.881617  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.881625  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:57.881631  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:57.881681  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.916245  301425 cri.go:89] found id: ""
	I0729 13:41:57.916273  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.916282  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:57.916290  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:57.916346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:57.952224  301425 cri.go:89] found id: ""
	I0729 13:41:57.952261  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.952272  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:57.952280  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:57.952340  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:57.985508  301425 cri.go:89] found id: ""
	I0729 13:41:57.985537  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.985548  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:57.985557  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:57.985624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:58.022354  301425 cri.go:89] found id: ""
	I0729 13:41:58.022382  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.022391  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:58.022397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:58.022462  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:58.055865  301425 cri.go:89] found id: ""
	I0729 13:41:58.055891  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.055900  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:58.055914  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:58.055931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:58.069143  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:58.069177  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:58.143137  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:58.143164  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:58.143183  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:58.224631  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:58.224672  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:58.266437  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:58.266470  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:00.819300  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:00.834195  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:00.834258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:00.869660  301425 cri.go:89] found id: ""
	I0729 13:42:00.869697  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.869709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:00.869717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:00.869777  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:00.915601  301425 cri.go:89] found id: ""
	I0729 13:42:00.915630  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.915638  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:00.915644  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:00.915694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:00.956981  301425 cri.go:89] found id: ""
	I0729 13:42:00.957020  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.957028  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:00.957034  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:00.957094  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:00.995761  301425 cri.go:89] found id: ""
	I0729 13:42:00.995793  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.995801  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:00.995817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:00.995869  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:01.047668  301425 cri.go:89] found id: ""
	I0729 13:42:01.047699  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.047707  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:01.047713  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:01.047787  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:01.085178  301425 cri.go:89] found id: ""
	I0729 13:42:01.085209  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.085217  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:01.085224  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:01.085278  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:01.125282  301425 cri.go:89] found id: ""
	I0729 13:42:01.125310  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.125320  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:01.125329  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:01.125396  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:01.165972  301425 cri.go:89] found id: ""
	I0729 13:42:01.166005  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.166021  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:01.166033  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:01.166049  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:01.236500  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:01.236523  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:01.236540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:01.320918  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:01.320959  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:01.366975  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:01.367015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:01.420347  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:01.420389  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:03.936048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:03.949603  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:03.949679  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:03.987529  301425 cri.go:89] found id: ""
	I0729 13:42:03.987557  301425 logs.go:276] 0 containers: []
	W0729 13:42:03.987567  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:03.987574  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:03.987639  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:04.027325  301425 cri.go:89] found id: ""
	I0729 13:42:04.027355  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.027365  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:04.027372  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:04.027437  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:04.063019  301425 cri.go:89] found id: ""
	I0729 13:42:04.063050  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.063059  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:04.063065  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:04.063117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:04.101106  301425 cri.go:89] found id: ""
	I0729 13:42:04.101135  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.101146  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:04.101153  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:04.101242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:04.137186  301425 cri.go:89] found id: ""
	I0729 13:42:04.137219  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.137230  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:04.137238  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:04.137302  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:04.175732  301425 cri.go:89] found id: ""
	I0729 13:42:04.175761  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.175770  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:04.175776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:04.175826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:04.213265  301425 cri.go:89] found id: ""
	I0729 13:42:04.213296  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.213307  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:04.213315  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:04.213381  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:04.248581  301425 cri.go:89] found id: ""
	I0729 13:42:04.248609  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.248617  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:04.248627  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:04.248643  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:04.303277  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:04.303400  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:04.317518  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:04.317547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:04.385209  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:04.385229  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:04.385242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:04.470629  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:04.470680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:07.012455  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:07.028535  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:07.028621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:07.063453  301425 cri.go:89] found id: ""
	I0729 13:42:07.063496  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.063505  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:07.063511  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:07.063582  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:07.098243  301425 cri.go:89] found id: ""
	I0729 13:42:07.098274  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.098284  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:07.098291  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:07.098357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:07.138122  301425 cri.go:89] found id: ""
	I0729 13:42:07.138149  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.138157  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:07.138162  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:07.138213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:07.176772  301425 cri.go:89] found id: ""
	I0729 13:42:07.176814  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.176826  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:07.176835  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:07.176894  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:07.214867  301425 cri.go:89] found id: ""
	I0729 13:42:07.214898  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.214914  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:07.214920  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:07.214979  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:07.253443  301425 cri.go:89] found id: ""
	I0729 13:42:07.253471  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.253481  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:07.253490  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:07.253550  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:07.287284  301425 cri.go:89] found id: ""
	I0729 13:42:07.287326  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.287338  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:07.287349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:07.287411  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:07.330550  301425 cri.go:89] found id: ""
	I0729 13:42:07.330577  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.330588  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:07.330599  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:07.330620  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:07.384226  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:07.384268  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:07.398790  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:07.398817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:07.462868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:07.462893  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:07.462914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:07.538665  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:07.538706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.078452  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:10.091962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:10.092027  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:10.127401  301425 cri.go:89] found id: ""
	I0729 13:42:10.127434  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.127445  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:10.127454  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:10.127531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:10.161088  301425 cri.go:89] found id: ""
	I0729 13:42:10.161117  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.161127  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:10.161134  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:10.161187  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:10.199721  301425 cri.go:89] found id: ""
	I0729 13:42:10.199751  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.199763  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:10.199769  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:10.199821  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:10.237067  301425 cri.go:89] found id: ""
	I0729 13:42:10.237106  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.237120  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:10.237127  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:10.237191  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:10.275863  301425 cri.go:89] found id: ""
	I0729 13:42:10.275894  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.275909  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:10.275918  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:10.275981  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:10.313234  301425 cri.go:89] found id: ""
	I0729 13:42:10.313262  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.313270  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:10.313276  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:10.313334  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:10.353530  301425 cri.go:89] found id: ""
	I0729 13:42:10.353558  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.353569  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:10.353576  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:10.353644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:10.389488  301425 cri.go:89] found id: ""
	I0729 13:42:10.389516  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.389527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:10.389539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:10.389562  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.428705  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:10.428740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:10.484413  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:10.484456  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:10.499203  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:10.499248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:10.570868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:10.570894  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:10.570907  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:13.151788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:13.165297  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:13.165367  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:13.203752  301425 cri.go:89] found id: ""
	I0729 13:42:13.203786  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.203798  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:13.203805  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:13.203874  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:13.240454  301425 cri.go:89] found id: ""
	I0729 13:42:13.240491  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.240499  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:13.240504  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:13.240556  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:13.276508  301425 cri.go:89] found id: ""
	I0729 13:42:13.276536  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.276545  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:13.276553  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:13.276617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:13.311252  301425 cri.go:89] found id: ""
	I0729 13:42:13.311280  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.311291  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:13.311299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:13.311353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:13.351777  301425 cri.go:89] found id: ""
	I0729 13:42:13.351808  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.351817  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:13.351823  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:13.351881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:13.389020  301425 cri.go:89] found id: ""
	I0729 13:42:13.389049  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.389058  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:13.389064  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:13.389126  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:13.424353  301425 cri.go:89] found id: ""
	I0729 13:42:13.424387  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.424395  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:13.424401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:13.424451  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:13.460755  301425 cri.go:89] found id: ""
	I0729 13:42:13.460788  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.460817  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:13.460830  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:13.460850  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:13.500201  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:13.500234  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:13.553319  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:13.553357  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:13.567496  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:13.567529  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:13.644662  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:13.644686  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:13.644700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.226602  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:16.242934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:16.243005  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:16.284033  301425 cri.go:89] found id: ""
	I0729 13:42:16.284064  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.284075  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:16.284083  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:16.284152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:16.328362  301425 cri.go:89] found id: ""
	I0729 13:42:16.328388  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.328396  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:16.328402  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:16.328464  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:16.372664  301425 cri.go:89] found id: ""
	I0729 13:42:16.372701  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.372712  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:16.372727  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:16.372818  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:16.416085  301425 cri.go:89] found id: ""
	I0729 13:42:16.416119  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.416130  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:16.416138  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:16.416194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:16.457786  301425 cri.go:89] found id: ""
	I0729 13:42:16.457819  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.457830  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:16.457838  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:16.457903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:16.498929  301425 cri.go:89] found id: ""
	I0729 13:42:16.498962  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.498971  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:16.498979  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:16.499043  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:16.546159  301425 cri.go:89] found id: ""
	I0729 13:42:16.546187  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.546199  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:16.546207  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:16.546270  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:16.585010  301425 cri.go:89] found id: ""
	I0729 13:42:16.585041  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.585052  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:16.585065  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:16.585081  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:16.639033  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:16.639079  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:16.656209  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:16.656242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:16.734835  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:16.734863  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:16.734940  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.818756  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:16.818798  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.370796  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:19.384267  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:19.384354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:19.425595  301425 cri.go:89] found id: ""
	I0729 13:42:19.425629  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.425641  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:19.425650  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:19.425715  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:19.461470  301425 cri.go:89] found id: ""
	I0729 13:42:19.461506  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.461517  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:19.461524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:19.461592  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:19.508232  301425 cri.go:89] found id: ""
	I0729 13:42:19.508265  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.508275  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:19.508283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:19.508360  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:19.546226  301425 cri.go:89] found id: ""
	I0729 13:42:19.546259  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.546275  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:19.546283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:19.546354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:19.581125  301425 cri.go:89] found id: ""
	I0729 13:42:19.581156  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.581167  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:19.581176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:19.581242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:19.619680  301425 cri.go:89] found id: ""
	I0729 13:42:19.619719  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.619728  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:19.619736  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:19.619800  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:19.657096  301425 cri.go:89] found id: ""
	I0729 13:42:19.657126  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.657136  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:19.657142  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:19.657203  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:19.697247  301425 cri.go:89] found id: ""
	I0729 13:42:19.697277  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.697286  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:19.697297  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:19.697312  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:19.714900  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:19.714935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:19.794118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:19.794145  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:19.794161  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:19.907077  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:19.907122  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.949841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:19.949871  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.515296  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:22.529187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:22.529286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:22.573033  301425 cri.go:89] found id: ""
	I0729 13:42:22.573070  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.573082  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:22.573091  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:22.573152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:22.608443  301425 cri.go:89] found id: ""
	I0729 13:42:22.608476  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.608489  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:22.608496  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:22.608566  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:22.641672  301425 cri.go:89] found id: ""
	I0729 13:42:22.641704  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.641716  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:22.641724  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:22.641781  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:22.673902  301425 cri.go:89] found id: ""
	I0729 13:42:22.673934  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.673944  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:22.673952  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:22.674012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:22.715131  301425 cri.go:89] found id: ""
	I0729 13:42:22.715165  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.715179  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:22.715187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:22.715251  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:22.748807  301425 cri.go:89] found id: ""
	I0729 13:42:22.748838  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.748848  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:22.748856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:22.748924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:22.781972  301425 cri.go:89] found id: ""
	I0729 13:42:22.782002  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.782012  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:22.782021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:22.782088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:22.815791  301425 cri.go:89] found id: ""
	I0729 13:42:22.815823  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.815834  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:22.815848  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.815864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.873595  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:22.873631  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:22.888081  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:22.888123  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:22.959873  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:22.959899  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.959912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:23.040996  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:23.041035  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:25.585159  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:25.604154  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.604240  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.645428  301425 cri.go:89] found id: ""
	I0729 13:42:25.645459  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.645466  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:25.645474  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.645534  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.682758  301425 cri.go:89] found id: ""
	I0729 13:42:25.682785  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.682793  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:25.682799  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.682864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.724297  301425 cri.go:89] found id: ""
	I0729 13:42:25.724330  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.724341  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:25.724349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.724401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.761124  301425 cri.go:89] found id: ""
	I0729 13:42:25.761157  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.761168  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:25.761177  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.761229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.802698  301425 cri.go:89] found id: ""
	I0729 13:42:25.802728  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.802741  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:25.802750  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.802804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.840472  301425 cri.go:89] found id: ""
	I0729 13:42:25.840499  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.840509  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:25.840516  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.840586  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.875217  301425 cri.go:89] found id: ""
	I0729 13:42:25.875255  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.875267  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.875273  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:25.875345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:25.919895  301425 cri.go:89] found id: ""
	I0729 13:42:25.919937  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.919948  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:25.919963  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.919988  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:25.981964  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:25.982005  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:25.997546  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:25.997576  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:26.075879  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:26.075901  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.075917  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.158552  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.158593  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:28.704328  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:28.718946  301425 kubeadm.go:597] duration metric: took 4m3.546660825s to restartPrimaryControlPlane
	W0729 13:42:28.719041  301425 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:28.719086  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:29.251866  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.267009  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:29.277498  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:29.287980  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:29.288003  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:29.288054  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:42:29.297830  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:29.297890  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:29.308263  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:42:29.318332  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:29.318388  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:29.328684  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.339841  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:29.339894  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.351304  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:42:29.363901  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:29.363960  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:29.377255  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:29.453113  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:42:29.453212  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:29.609835  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:29.609970  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:29.610106  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:29.812529  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:29.814455  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:29.814551  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:29.814633  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:29.814727  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:29.814799  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:29.814915  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:29.814979  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:29.815695  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:29.816098  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:29.816602  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:29.817114  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:29.817184  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:29.817266  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:30.122967  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:30.287162  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:30.336346  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:30.516317  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:30.532829  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:30.533732  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:30.533809  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:30.672345  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:30.674334  301425 out.go:204]   - Booting up control plane ...
	I0729 13:42:30.674492  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:30.681661  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:30.681784  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:30.683350  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:30.687290  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:43:10.689091  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:43:10.689558  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:10.689837  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:15.690809  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:15.691011  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:25.691962  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:25.692244  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:45.693269  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:45.693473  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696107  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:44:25.696300  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696307  301425 kubeadm.go:310] 
	I0729 13:44:25.696341  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:44:25.696400  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:44:25.696419  301425 kubeadm.go:310] 
	I0729 13:44:25.696463  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:44:25.696510  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:44:25.696653  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:44:25.696674  301425 kubeadm.go:310] 
	I0729 13:44:25.696818  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:44:25.696868  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:44:25.696921  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:44:25.696930  301425 kubeadm.go:310] 
	I0729 13:44:25.697076  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:44:25.697192  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:44:25.697206  301425 kubeadm.go:310] 
	I0729 13:44:25.697349  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:44:25.697459  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:44:25.697568  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:44:25.697669  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:44:25.697680  301425 kubeadm.go:310] 
	I0729 13:44:25.698359  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:44:25.698490  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:44:25.698596  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:44:25.698771  301425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:44:25.698848  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:44:26.160539  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:26.175482  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:44:26.185562  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:44:26.185593  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:44:26.185657  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:44:26.195781  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:44:26.195865  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:44:26.207404  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:44:26.217068  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:44:26.217188  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:44:26.226075  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.234622  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:44:26.234684  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.243756  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:44:26.252630  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:44:26.252695  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:44:26.262846  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:44:26.340215  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:44:26.340318  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:44:26.496049  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:44:26.496199  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:44:26.496327  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:44:26.678135  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:44:26.680089  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:44:26.680173  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:44:26.680257  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:44:26.680378  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:44:26.680470  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:44:26.680570  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:44:26.680653  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:44:26.680751  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:44:26.681022  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:44:26.681519  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:44:26.681876  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:44:26.681994  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:44:26.682083  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:44:26.762680  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:44:26.922517  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:44:26.973731  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:44:27.193064  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:44:27.216477  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:44:27.219036  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:44:27.219293  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:44:27.386424  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:44:27.388194  301425 out.go:204]   - Booting up control plane ...
	I0729 13:44:27.388340  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:44:27.390345  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:44:27.391455  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:44:27.392303  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:44:27.394301  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:45:07.396989  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:45:07.397449  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:07.397719  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:12.397982  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:12.398297  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:22.398751  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:22.399010  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:42.399462  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:42.399675  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398413  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:46:22.398684  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398700  301425 kubeadm.go:310] 
	I0729 13:46:22.398763  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:46:22.398844  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:46:22.398886  301425 kubeadm.go:310] 
	I0729 13:46:22.398948  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:46:22.399002  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:46:22.399132  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:46:22.399145  301425 kubeadm.go:310] 
	I0729 13:46:22.399287  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:46:22.399346  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:46:22.399392  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:46:22.399404  301425 kubeadm.go:310] 
	I0729 13:46:22.399530  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:46:22.399610  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:46:22.399617  301425 kubeadm.go:310] 
	I0729 13:46:22.399735  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:46:22.399844  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:46:22.399943  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:46:22.400021  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:46:22.400035  301425 kubeadm.go:310] 
	I0729 13:46:22.400291  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:46:22.400370  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:46:22.400440  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:46:22.400520  301425 kubeadm.go:394] duration metric: took 7m57.286753846s to StartCluster
	I0729 13:46:22.400612  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:46:22.400692  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:46:22.446188  301425 cri.go:89] found id: ""
	I0729 13:46:22.446216  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.446225  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:46:22.446232  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:46:22.446289  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:46:22.484089  301425 cri.go:89] found id: ""
	I0729 13:46:22.484118  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.484128  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:46:22.484135  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:46:22.484197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:46:22.526817  301425 cri.go:89] found id: ""
	I0729 13:46:22.526846  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.526854  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:46:22.526860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:46:22.526912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:46:22.564787  301425 cri.go:89] found id: ""
	I0729 13:46:22.564834  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.564846  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:46:22.564854  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:46:22.564920  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:46:22.601843  301425 cri.go:89] found id: ""
	I0729 13:46:22.601881  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.601892  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:46:22.601900  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:46:22.601980  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:46:22.637420  301425 cri.go:89] found id: ""
	I0729 13:46:22.637448  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.637455  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:46:22.637462  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:46:22.637519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:46:22.672427  301425 cri.go:89] found id: ""
	I0729 13:46:22.672465  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.672476  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:46:22.672485  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:46:22.672549  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:46:22.708256  301425 cri.go:89] found id: ""
	I0729 13:46:22.708285  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.708294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:46:22.708306  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:46:22.708323  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:46:22.819287  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:46:22.819327  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:46:22.859298  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:46:22.859339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:46:22.914290  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:46:22.914342  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:46:22.936919  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:46:22.936951  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:46:23.035889  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 13:46:23.035939  301425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:46:23.035991  301425 out.go:239] * 
	* 
	W0729 13:46:23.036103  301425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.036137  301425 out.go:239] * 
	* 
	W0729 13:46:23.037370  301425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:46:23.040573  301425 out.go:177] 
	W0729 13:46:23.042130  301425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.042173  301425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:46:23.042193  301425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:46:23.043539  301425 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-924039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
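The failure mode here is K8S_KUBELET_NOT_RUNNING (exit status 109): during the second start against Kubernetes v1.20.0, kubeadm's wait-control-plane phase times out because the kubelet never answers its health check on http://localhost:10248/healthz, so no control-plane containers ever appear. The kubeadm output captured above already names the usual first triage steps; a minimal sketch of that triage, assuming shell access to the node (for example via `minikube ssh -p old-k8s-version-924039`), would be:

	# Commands taken from the kubeadm advice printed above; run inside the VM.
	sudo systemctl status kubelet        # is the unit active, and why did it exit?
	sudo journalctl -xeu kubelet         # recent kubelet logs
	# List any Kubernetes containers CRI-O managed to start
	# (the post-mortem scan above found none).
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet is failing on a cgroup-driver mismatch, the log's own suggestion is to retry the same start with `--extra-config=kubelet.cgroup-driver=systemd` appended, keeping the rest of the flag set as shown in the failing args above.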
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (225.54561ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-924039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-924039 logs -n 25: (1.604860295s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo cat                              | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:34:10
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
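
The header format above is the standard glog/klog layout used for every line that follows: a one-letter severity (I, W, E or F), the month and day, a microsecond timestamp, the thread id, the source file and line, then the message. A minimal Go sketch (illustrative only, not part of the minikube tooling) that splits one such line into those fields:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches: <severity><mmdd> <hh:mm:ss.uuuuuu> <threadid> <file:line>] <msg>
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	// Example line taken verbatim from this log.
	line := "I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
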
	I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:10.969348  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969356  301425 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:10.969360  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969506  301425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:34:10.970007  301425 out.go:298] Setting JSON to false
	I0729 13:34:10.970908  301425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11794,"bootTime":1722248257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:34:10.970971  301425 start.go:139] virtualization: kvm guest
	I0729 13:34:10.973245  301425 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:34:10.974804  301425 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:34:10.974803  301425 notify.go:220] Checking for updates...
	I0729 13:34:10.977011  301425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:34:10.978270  301425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:34:10.979473  301425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:34:10.980743  301425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:34:10.981923  301425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:34:10.983514  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:34:10.983962  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:10.984049  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:10.998985  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 13:34:10.999407  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:10.999928  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:10.999951  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.000306  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.000497  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.002455  301425 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:34:11.003702  301425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:34:11.003997  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:11.004037  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:11.018707  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0729 13:34:11.019177  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:11.019653  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:11.019676  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.019968  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.020126  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.055819  301425 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:34:11.057085  301425 start.go:297] selected driver: kvm2
	I0729 13:34:11.057104  301425 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.057242  301425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:34:11.057967  301425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.058029  301425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:34:11.073706  301425 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:34:11.074089  301425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:34:11.074169  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:34:11.074188  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:34:11.074240  301425 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.074366  301425 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.076296  301425 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:34:09.149068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:11.077828  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:34:11.077869  301425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:34:11.077879  301425 cache.go:56] Caching tarball of preloaded images
	I0729 13:34:11.077959  301425 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:34:11.077970  301425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:34:11.078069  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:34:11.078241  301425 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:34:15.229067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:18.301058  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:24.381104  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:27.453064  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:33.533067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:36.605120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:42.685075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:45.757111  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:51.837033  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:54.909068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:00.989073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:04.061125  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:10.141082  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:13.213123  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:19.293109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:22.365061  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:28.445075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:31.517094  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:37.597080  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:40.669073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:46.749070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:49.821083  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:55.901013  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:58.973149  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:05.053098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:08.125109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:14.205093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:17.277093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:23.357105  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:26.429122  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:32.509070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:35.581107  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:41.661120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:44.733129  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:50.813085  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:53.885117  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:59.965073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:03.037079  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:09.117098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:12.189049  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:15.193505  300746 start.go:364] duration metric: took 4m36.683808785s to acquireMachinesLock for "no-preload-566777"
	I0729 13:37:15.193569  300746 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:15.193577  300746 fix.go:54] fixHost starting: 
	I0729 13:37:15.193937  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:15.193976  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:15.209623  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0729 13:37:15.210158  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:15.210625  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:37:15.210646  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:15.211001  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:15.211265  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:15.211468  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:37:15.213144  300746 fix.go:112] recreateIfNeeded on no-preload-566777: state=Stopped err=<nil>
	I0729 13:37:15.213185  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	W0729 13:37:15.213349  300746 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:15.215474  300746 out.go:177] * Restarting existing kvm2 VM for "no-preload-566777" ...
	I0729 13:37:15.190804  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:15.190850  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191224  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:37:15.191257  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191494  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:37:15.193354  300705 machine.go:97] duration metric: took 4m37.425774293s to provisionDockerMachine
	I0729 13:37:15.193407  300705 fix.go:56] duration metric: took 4m37.447841932s for fixHost
	I0729 13:37:15.193419  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 4m37.447869212s
	W0729 13:37:15.193447  300705 start.go:714] error starting host: provision: host is not running
	W0729 13:37:15.193569  300705 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 13:37:15.193581  300705 start.go:729] Will try again in 5 seconds ...
	I0729 13:37:15.216957  300746 main.go:141] libmachine: (no-preload-566777) Calling .Start
	I0729 13:37:15.217120  300746 main.go:141] libmachine: (no-preload-566777) Ensuring networks are active...
	I0729 13:37:15.217761  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network default is active
	I0729 13:37:15.218067  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network mk-no-preload-566777 is active
	I0729 13:37:15.218451  300746 main.go:141] libmachine: (no-preload-566777) Getting domain xml...
	I0729 13:37:15.219134  300746 main.go:141] libmachine: (no-preload-566777) Creating domain...
	I0729 13:37:16.412301  300746 main.go:141] libmachine: (no-preload-566777) Waiting to get IP...
	I0729 13:37:16.413162  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.413576  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.413670  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.413557  302040 retry.go:31] will retry after 233.512145ms: waiting for machine to come up
	I0729 13:37:16.649335  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.649921  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.649945  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.649885  302040 retry.go:31] will retry after 328.846738ms: waiting for machine to come up
	I0729 13:37:16.980566  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.980976  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.981022  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.980926  302040 retry.go:31] will retry after 329.69915ms: waiting for machine to come up
	I0729 13:37:17.312547  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.312948  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.312977  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.312906  302040 retry.go:31] will retry after 418.810733ms: waiting for machine to come up
	I0729 13:37:17.733615  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.734042  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.734065  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.734009  302040 retry.go:31] will retry after 694.191211ms: waiting for machine to come up
	I0729 13:37:20.196079  300705 start.go:360] acquireMachinesLock for embed-certs-135920: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:18.429670  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:18.430024  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:18.430055  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:18.429973  302040 retry.go:31] will retry after 857.66396ms: waiting for machine to come up
	I0729 13:37:19.289078  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:19.289491  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:19.289521  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:19.289458  302040 retry.go:31] will retry after 994.340261ms: waiting for machine to come up
	I0729 13:37:20.285875  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:20.286308  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:20.286340  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:20.286263  302040 retry.go:31] will retry after 1.052380852s: waiting for machine to come up
	I0729 13:37:21.340435  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:21.340775  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:21.340821  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:21.340743  302040 retry.go:31] will retry after 1.429700498s: waiting for machine to come up
	I0729 13:37:22.772362  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:22.772754  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:22.772782  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:22.772700  302040 retry.go:31] will retry after 1.702185495s: waiting for machine to come up
	I0729 13:37:24.477636  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:24.478074  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:24.478106  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:24.478003  302040 retry.go:31] will retry after 2.649912402s: waiting for machine to come up
	I0729 13:37:27.129797  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:27.130212  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:27.130243  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:27.130159  302040 retry.go:31] will retry after 3.079887428s: waiting for machine to come up
	I0729 13:37:30.213431  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:30.213918  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:30.213958  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:30.213875  302040 retry.go:31] will retry after 3.08003223s: waiting for machine to come up
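
The repeated "will retry after ..." messages above (233ms, 328ms, ... up to ~3s) are consistent with a jittered, roughly exponential backoff while libmachine waits for the restarted VM to obtain an IP address. A hedged sketch of that pattern follows; the function name, base interval and jitter factor are assumptions chosen only to mimic the delays seen in the log, not the libmachine implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it succeeds or the attempts are exhausted,
// sleeping a jittered, roughly doubling interval between polls.
// (Hypothetical helper, for illustration only.)
func waitForIP(lookup func() error, attempts int) error {
	base := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if lookup() == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)/2+1)) // up to +50% jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		base *= 2
	}
	return errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	_ = waitForIP(func() error {
		calls++
		if calls < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 12)
}
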
	I0729 13:37:33.297139  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.297604  300746 main.go:141] libmachine: (no-preload-566777) Found IP for machine: 192.168.61.84
	I0729 13:37:33.297627  300746 main.go:141] libmachine: (no-preload-566777) Reserving static IP address...
	I0729 13:37:33.297639  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has current primary IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.298106  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.298146  300746 main.go:141] libmachine: (no-preload-566777) Reserved static IP address: 192.168.61.84
	I0729 13:37:33.298164  300746 main.go:141] libmachine: (no-preload-566777) DBG | skip adding static IP to network mk-no-preload-566777 - found existing host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"}
	I0729 13:37:33.298178  300746 main.go:141] libmachine: (no-preload-566777) DBG | Getting to WaitForSSH function...
	I0729 13:37:33.298194  300746 main.go:141] libmachine: (no-preload-566777) Waiting for SSH to be available...
	I0729 13:37:33.300310  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300618  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.300653  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300731  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH client type: external
	I0729 13:37:33.300773  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa (-rw-------)
	I0729 13:37:33.300826  300746 main.go:141] libmachine: (no-preload-566777) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:33.300957  300746 main.go:141] libmachine: (no-preload-566777) DBG | About to run SSH command:
	I0729 13:37:33.300985  300746 main.go:141] libmachine: (no-preload-566777) DBG | exit 0
	I0729 13:37:34.861481  301044 start.go:364] duration metric: took 4m23.064160625s to acquireMachinesLock for "default-k8s-diff-port-972693"
	I0729 13:37:34.861564  301044 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:34.861576  301044 fix.go:54] fixHost starting: 
	I0729 13:37:34.862021  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:34.862055  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:34.879106  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0729 13:37:34.879506  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:34.880050  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:37:34.880077  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:34.880423  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:34.880637  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:34.880838  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:37:34.882251  301044 fix.go:112] recreateIfNeeded on default-k8s-diff-port-972693: state=Stopped err=<nil>
	I0729 13:37:34.882284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	W0729 13:37:34.882465  301044 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:34.884611  301044 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-972693" ...
	I0729 13:37:33.420745  300746 main.go:141] libmachine: (no-preload-566777) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:33.421178  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetConfigRaw
	I0729 13:37:33.421861  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.424343  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.424680  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.424710  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.425061  300746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/config.json ...
	I0729 13:37:33.425244  300746 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:33.425262  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:33.425513  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.427708  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.427961  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.427989  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.428171  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.428354  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428528  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428672  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.428933  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.429139  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.429150  300746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:33.525027  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:33.525065  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525306  300746 buildroot.go:166] provisioning hostname "no-preload-566777"
	I0729 13:37:33.525340  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525551  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.528124  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528491  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.528529  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528677  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.528865  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529025  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529144  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.529286  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.529453  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.529465  300746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-566777 && echo "no-preload-566777" | sudo tee /etc/hostname
	I0729 13:37:33.638867  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-566777
	
	I0729 13:37:33.638902  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.641406  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641730  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.641762  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641908  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.642112  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642285  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642414  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.642555  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.642727  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.642743  300746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-566777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-566777/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-566777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:33.749760  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:33.749789  300746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:33.749812  300746 buildroot.go:174] setting up certificates
	I0729 13:37:33.749821  300746 provision.go:84] configureAuth start
	I0729 13:37:33.749831  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.750114  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.752924  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753241  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.753264  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753477  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.755385  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755681  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.755701  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755840  300746 provision.go:143] copyHostCerts
	I0729 13:37:33.755904  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:33.755926  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:33.756019  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:33.756156  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:33.756169  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:33.756206  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:33.756276  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:33.756286  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:33.756317  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:33.756380  300746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.no-preload-566777 san=[127.0.0.1 192.168.61.84 localhost minikube no-preload-566777]
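
The server certificate generated above carries the SAN list shown in the log (127.0.0.1, 192.168.61.84, localhost, minikube, no-preload-566777). To double-check those SANs on the CI host, a small standard-library Go sketch like the following would work; the path is copied from the log and this is only an inspection aid, not minikube code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}
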
	I0729 13:37:34.226953  300746 provision.go:177] copyRemoteCerts
	I0729 13:37:34.227033  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:34.227066  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.229542  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229816  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.229853  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.230177  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.230314  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.230452  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.310803  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:37:34.334545  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:37:34.357908  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:34.381163  300746 provision.go:87] duration metric: took 631.325967ms to configureAuth
	I0729 13:37:34.381200  300746 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:34.381441  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:37:34.381535  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.383985  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384286  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.384312  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384473  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.384681  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384862  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384995  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.385176  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.385393  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.385414  300746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:34.640587  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:34.640615  300746 machine.go:97] duration metric: took 1.215357318s to provisionDockerMachine
	I0729 13:37:34.640628  300746 start.go:293] postStartSetup for "no-preload-566777" (driver="kvm2")
	I0729 13:37:34.640645  300746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:34.640683  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.641067  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:34.641104  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.643711  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644066  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.644097  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644215  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.644398  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.644555  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.644677  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.723215  300746 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:34.727393  300746 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:34.727425  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:34.727507  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:34.727614  300746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:34.727770  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:34.736666  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:34.759678  300746 start.go:296] duration metric: took 119.034973ms for postStartSetup
	I0729 13:37:34.759716  300746 fix.go:56] duration metric: took 19.566140877s for fixHost
	I0729 13:37:34.759748  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.762103  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762468  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.762491  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762645  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.762843  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763008  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763111  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.763229  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.763392  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.763403  300746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:34.861306  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260254.835831305
	
	I0729 13:37:34.861333  300746 fix.go:216] guest clock: 1722260254.835831305
	I0729 13:37:34.861341  300746 fix.go:229] Guest: 2024-07-29 13:37:34.835831305 +0000 UTC Remote: 2024-07-29 13:37:34.759720831 +0000 UTC m=+296.387252495 (delta=76.110474ms)
	I0729 13:37:34.861376  300746 fix.go:200] guest clock delta is within tolerance: 76.110474ms
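
The clock check above is plain arithmetic: the guest reports 1722260254.835831305 (13:37:34.835831305 UTC) while the host read 13:37:34.759720831, so the guest is ahead by 76.110474ms, which the log reports as within tolerance. A quick reproduction of that delta in Go, using the two timestamps from the log (the one-second tolerance below is an assumption for illustration only):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1722260254, 835831305).UTC()                   // guest clock from the log
	remote := time.Date(2024, 7, 29, 13, 37, 34, 759720831, time.UTC) // host-side reading from the log
	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v, within %v: %v\n", delta, tolerance, delta > -tolerance && delta < tolerance)
	// prints: delta=76.110474ms, within 1s: true
}
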
	I0729 13:37:34.861384  300746 start.go:83] releasing machines lock for "no-preload-566777", held for 19.66783585s
	I0729 13:37:34.861413  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.861708  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:34.864181  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864534  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.864567  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864757  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865296  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865467  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865546  300746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:34.865600  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.865726  300746 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:34.865753  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.868333  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868522  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868772  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868810  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868839  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868859  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868913  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869060  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869152  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869209  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869300  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869349  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869417  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.869551  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.970978  300746 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:34.978226  300746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:35.128653  300746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:35.134619  300746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:35.134688  300746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:35.150674  300746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:35.150697  300746 start.go:495] detecting cgroup driver to use...
	I0729 13:37:35.150762  300746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:35.166545  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:35.178859  300746 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:35.178913  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:35.197133  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:35.214430  300746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:35.337707  300746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:35.467057  300746 docker.go:233] disabling docker service ...
	I0729 13:37:35.467134  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:35.480960  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:35.493850  300746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:35.629455  300746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:35.741534  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:35.754886  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:35.773243  300746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:37:35.773323  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.783589  300746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:35.783673  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.794150  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.805389  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.816636  300746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:35.828027  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.838467  300746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.856470  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.866773  300746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:35.876110  300746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:35.876175  300746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:35.889768  300746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:35.909971  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:36.046023  300746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:36.192169  300746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:36.192238  300746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:36.197281  300746 start.go:563] Will wait 60s for crictl version
	I0729 13:37:36.197365  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.201359  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:36.248317  300746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:36.248420  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.276247  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.306549  300746 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
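Note on the CRI-O preparation above: every change is an in-place sed edit of the drop-in at /etc/crio/crio.conf.d/02-crio.conf, followed by a daemon-reload and a crio restart, so the base config shipped in the ISO is never replaced wholesale. A condensed sketch of the same sequence (commands taken from this run; paths assume the stock minikube guest image):

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter          # bridge-nf-call-iptables only exists once this module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio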
	I0729 13:37:34.885944  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Start
	I0729 13:37:34.886114  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring networks are active...
	I0729 13:37:34.886856  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network default is active
	I0729 13:37:34.887211  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network mk-default-k8s-diff-port-972693 is active
	I0729 13:37:34.887684  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Getting domain xml...
	I0729 13:37:34.888427  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Creating domain...
	I0729 13:37:36.147265  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting to get IP...
	I0729 13:37:36.148095  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148547  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148616  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.148516  302181 retry.go:31] will retry after 191.117257ms: waiting for machine to come up
	I0729 13:37:36.340984  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341507  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.341444  302181 retry.go:31] will retry after 285.557329ms: waiting for machine to come up
	I0729 13:37:36.629066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629670  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.629621  302181 retry.go:31] will retry after 397.294163ms: waiting for machine to come up
	I0729 13:37:36.307930  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:36.311057  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311389  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:36.311417  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311699  300746 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:36.316257  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
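The host.minikube.internal entry above is rewritten idempotently: any existing line (matched on the trailing tab-separated hostname) is filtered out and a fresh one appended in a single pass, so repeated starts never accumulate duplicates. A minimal standalone sketch with the gateway IP from this run:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts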
	I0729 13:37:36.330109  300746 kubeadm.go:883] updating cluster {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:36.330268  300746 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:37:36.330320  300746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:36.367218  300746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 13:37:36.367250  300746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:37:36.367327  300746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.367333  300746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.367394  300746 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 13:37:36.367404  300746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.367432  300746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.367353  300746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.367412  300746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.367743  300746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.369020  300746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.369125  300746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.369150  300746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.369203  300746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.369015  300746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.369484  300746 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 13:37:36.369609  300746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.369763  300746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.560256  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.600945  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.604476  300746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 13:37:36.604539  300746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.604592  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.606566  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 13:37:36.649109  300746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 13:37:36.649210  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.649212  300746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.649328  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.696863  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.698623  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.713816  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.727059  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.764110  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.764204  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.764208  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.784479  300746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 13:37:36.784542  300746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.784558  300746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 13:37:36.784597  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.784598  300746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.784694  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.813445  300746 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 13:37:36.813491  300746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.813544  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.825275  300746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 13:37:36.825290  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 13:37:36.825392  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825463  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825327  300746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.825515  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.852786  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.852866  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:36.852822  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.852843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.852984  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:37.587824  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:37.028009  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028349  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028378  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.028295  302181 retry.go:31] will retry after 507.597159ms: waiting for machine to come up
	I0729 13:37:37.538138  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538550  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538581  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.538507  302181 retry.go:31] will retry after 508.855087ms: waiting for machine to come up
	I0729 13:37:38.049628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050241  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050277  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.050198  302181 retry.go:31] will retry after 889.089993ms: waiting for machine to come up
	I0729 13:37:38.940541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941096  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.941009  302181 retry.go:31] will retry after 891.889885ms: waiting for machine to come up
	I0729 13:37:39.834956  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835395  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:39.835341  302181 retry.go:31] will retry after 1.030799215s: waiting for machine to come up
	I0729 13:37:40.867814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868336  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868367  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:40.868283  302181 retry.go:31] will retry after 1.40369357s: waiting for machine to come up
	I0729 13:37:38.870850  300746 ssh_runner.go:235] Completed: which crictl: (2.045307778s)
	I0729 13:37:38.870925  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:38.870921  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.045429354s)
	I0729 13:37:38.870946  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 13:37:38.871001  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0: (2.018116939s)
	I0729 13:37:38.871024  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.01808875s)
	I0729 13:37:38.871054  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871083  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.018080011s)
	I0729 13:37:38.871109  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 13:37:38.871120  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871056  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 13:37:38.871166  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871151  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871234  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0: (2.018278547s)
	I0729 13:37:38.871247  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:38.871259  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 13:37:38.871304  300746 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.283446632s)
	I0729 13:37:38.871343  300746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 13:37:38.871372  300746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:38.871406  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:38.871310  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:38.939395  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:38.939419  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 13:37:38.939532  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:40.939632  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.068434649s)
	I0729 13:37:40.939669  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 13:37:40.939693  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939702  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.068259157s)
	I0729 13:37:40.939734  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 13:37:40.939761  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939794  300746 ssh_runner.go:235] Completed: which crictl: (2.068372626s)
	I0729 13:37:40.939827  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.068564103s)
	I0729 13:37:40.939843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:40.939844  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.000295325s)
	I0729 13:37:40.939847  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 13:37:40.939856  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 13:37:40.999406  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 13:37:40.999505  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:43.015187  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.075399061s)
	I0729 13:37:43.015226  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 13:37:43.015243  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.015694914s)
	I0729 13:37:43.015259  300746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:43.015279  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 13:37:43.015313  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:42.273822  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274326  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:42.274251  302181 retry.go:31] will retry after 2.255017939s: waiting for machine to come up
	I0729 13:37:44.531432  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531845  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531873  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:44.531801  302181 retry.go:31] will retry after 2.272405743s: waiting for machine to come up
	I0729 13:37:46.401061  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.385713069s)
	I0729 13:37:46.401109  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 13:37:46.401147  300746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:46.401207  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:48.358628  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.9573934s)
	I0729 13:37:48.358659  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 13:37:48.358682  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:48.358733  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:46.806043  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806654  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806681  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:46.806599  302181 retry.go:31] will retry after 2.212726673s: waiting for machine to come up
	I0729 13:37:49.022244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022732  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022770  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:49.022677  302181 retry.go:31] will retry after 3.071460325s: waiting for machine to come up
	I0729 13:37:50.216727  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.857925776s)
	I0729 13:37:50.216769  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 13:37:50.216822  300746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.216879  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.862685  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 13:37:50.862738  300746 cache_images.go:123] Successfully loaded all cached images
	I0729 13:37:50.862746  300746 cache_images.go:92] duration metric: took 14.49548231s to LoadCachedImages
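The LoadCachedImages block above follows the same per-image pattern throughout: inspect what the runtime already has, remove the tag if it is missing or does not match the expected digest, skip the copy when the cached tarball already exists under /var/lib/minikube/images, and load it with podman. A sketch for a single image, using the kube-proxy tag from this run:

    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.31.0-beta.0   # check what is present
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0                        # drop the stale/missing tag
    sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0                    # load the cached tarball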
	I0729 13:37:50.862763  300746 kubeadm.go:934] updating node { 192.168.61.84 8443 v1.31.0-beta.0 crio true true} ...
	I0729 13:37:50.862924  300746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-566777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:50.863021  300746 ssh_runner.go:195] Run: crio config
	I0729 13:37:50.911526  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:50.911551  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:50.911563  300746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:50.911593  300746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.84 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-566777 NodeName:no-preload-566777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:50.911782  300746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-566777"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:50.911856  300746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 13:37:50.922091  300746 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:50.922162  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:50.931275  300746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 13:37:50.947494  300746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 13:37:50.963108  300746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 13:37:50.979666  300746 ssh_runner.go:195] Run: grep 192.168.61.84	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:50.983215  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:50.994627  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:51.117275  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
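For context on the kubelet bring-up just above: the drop-in 10-kubeadm.conf (with the ExecStart shown earlier), the kubelet.service unit, and kubeadm.yaml.new are pushed over SSH ("scp memory" in the log), after which only a reload and start are needed on the guest. A rough sketch of the guest-side steps, assuming the same paths:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new are written here by the host
    sudo systemctl daemon-reload
    sudo systemctl start kubelet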
	I0729 13:37:51.134412  300746 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777 for IP: 192.168.61.84
	I0729 13:37:51.134439  300746 certs.go:194] generating shared ca certs ...
	I0729 13:37:51.134461  300746 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:51.134641  300746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:51.134692  300746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:51.134703  300746 certs.go:256] generating profile certs ...
	I0729 13:37:51.134825  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/client.key
	I0729 13:37:51.134901  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key.445c667e
	I0729 13:37:51.134962  300746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key
	I0729 13:37:51.135114  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:51.135153  300746 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:51.135166  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:51.135196  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:51.135225  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:51.135256  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:51.135309  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:51.136036  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:51.169507  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:51.201916  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:51.227860  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:51.263617  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:37:51.288105  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:37:51.314837  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:51.343892  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:37:51.367328  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:51.389470  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:51.411446  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:51.433270  300746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:51.448939  300746 ssh_runner.go:195] Run: openssl version
	I0729 13:37:51.454475  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:51.465080  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469541  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469605  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.475366  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:51.485979  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:51.496382  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500511  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500571  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.505997  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:51.516733  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:51.527637  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531754  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531797  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.537237  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:51.548006  300746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:51.552581  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:51.558414  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:51.563879  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:51.569869  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:51.575800  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:37:51.581525  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
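The certificate handling above uses two standard OpenSSL conventions: each CA is made visible to the system trust store through a symlink in /etc/ssl/certs named after its subject hash (b5213941 for minikubeCA in this run), and -checkend 86400 exits non-zero if a certificate expires within the next 24 hours. A condensed sketch, assuming the paths from this run:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)      # -> b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400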
	I0729 13:37:51.587642  300746 kubeadm.go:392] StartCluster: {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:51.587777  300746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:37:51.587828  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.627118  300746 cri.go:89] found id: ""
	I0729 13:37:51.627212  300746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:37:51.637686  300746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:37:51.637711  300746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:37:51.637765  300746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:37:51.647368  300746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:37:51.648291  300746 kubeconfig.go:125] found "no-preload-566777" server: "https://192.168.61.84:8443"
	I0729 13:37:51.650296  300746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:37:51.659616  300746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.84
	I0729 13:37:51.659649  300746 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:37:51.659663  300746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:37:51.659714  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.700636  300746 cri.go:89] found id: ""
	I0729 13:37:51.700703  300746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:37:51.718225  300746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:37:51.728237  300746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:37:51.728257  300746 kubeadm.go:157] found existing configuration files:
	
	I0729 13:37:51.728303  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:37:51.738280  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:37:51.738364  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:37:51.748770  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:37:51.758572  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:37:51.758649  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:37:51.769634  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.779757  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:37:51.779827  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.790745  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:37:51.801212  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:37:51.801275  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:37:51.811706  300746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:37:51.821251  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:51.933905  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.401823  301425 start.go:364] duration metric: took 3m42.323534375s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:37:53.401902  301425 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:53.401914  301425 fix.go:54] fixHost starting: 
	I0729 13:37:53.402310  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:53.402344  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:53.421973  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 13:37:53.422456  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:53.423079  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:37:53.423112  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:53.423508  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:53.423734  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:37:53.423883  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:37:53.425687  301425 fix.go:112] recreateIfNeeded on old-k8s-version-924039: state=Stopped err=<nil>
	I0729 13:37:53.425733  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	W0729 13:37:53.425902  301425 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:53.427931  301425 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	I0729 13:37:52.097443  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.097870  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Found IP for machine: 192.168.50.34
	I0729 13:37:52.097904  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserving static IP address...
	I0729 13:37:52.097923  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has current primary IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.098329  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.098357  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserved static IP address: 192.168.50.34
	I0729 13:37:52.098377  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | skip adding static IP to network mk-default-k8s-diff-port-972693 - found existing host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"}
	I0729 13:37:52.098406  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for SSH to be available...
	I0729 13:37:52.098423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Getting to WaitForSSH function...
	I0729 13:37:52.100530  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.100878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.100908  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.101029  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH client type: external
	I0729 13:37:52.101062  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa (-rw-------)
	I0729 13:37:52.101106  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:52.101121  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | About to run SSH command:
	I0729 13:37:52.101145  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | exit 0
	I0729 13:37:52.225041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:52.225381  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetConfigRaw
	I0729 13:37:52.226001  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.228722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229109  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.229140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229315  301044 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/config.json ...
	I0729 13:37:52.229522  301044 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:52.229541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:52.229716  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.231823  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.232181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.232446  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232613  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232758  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.232913  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.233100  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.233111  301044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:52.336948  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:52.336978  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337288  301044 buildroot.go:166] provisioning hostname "default-k8s-diff-port-972693"
	I0729 13:37:52.337321  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337552  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.340284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340598  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.340623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340724  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.340913  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341090  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341261  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.341419  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.341591  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.341603  301044 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-972693 && echo "default-k8s-diff-port-972693" | sudo tee /etc/hostname
	I0729 13:37:52.455264  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-972693
	
	I0729 13:37:52.455294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.457937  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458304  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.458332  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458465  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.458667  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458857  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458995  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.459170  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.459352  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.459376  301044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-972693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-972693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-972693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:52.570543  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:52.570578  301044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:52.570603  301044 buildroot.go:174] setting up certificates
	I0729 13:37:52.570617  301044 provision.go:84] configureAuth start
	I0729 13:37:52.570628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.570900  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.573309  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573609  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.573641  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573751  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.575826  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.576177  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576344  301044 provision.go:143] copyHostCerts
	I0729 13:37:52.576414  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:52.576483  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:52.576568  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:52.576698  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:52.576707  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:52.576728  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:52.576786  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:52.576815  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:52.576845  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:52.576902  301044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-972693 san=[127.0.0.1 192.168.50.34 default-k8s-diff-port-972693 localhost minikube]
	I0729 13:37:52.764928  301044 provision.go:177] copyRemoteCerts
	I0729 13:37:52.764988  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:52.765018  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.767540  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.767842  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.767872  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.768041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.768213  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.768362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.768474  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:52.847615  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:52.877666  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 13:37:52.901219  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:52.924922  301044 provision.go:87] duration metric: took 354.279838ms to configureAuth
	I0729 13:37:52.924953  301044 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:52.925157  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:52.925244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.927791  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.928181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.928533  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928830  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.928978  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.929208  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.929230  301044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:53.176359  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:53.176391  301044 machine.go:97] duration metric: took 946.853063ms to provisionDockerMachine
	I0729 13:37:53.176404  301044 start.go:293] postStartSetup for "default-k8s-diff-port-972693" (driver="kvm2")
	I0729 13:37:53.176419  301044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:53.176441  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.176782  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:53.176818  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.179340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.179698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179858  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.180053  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.180214  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.180336  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.259826  301044 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:53.264059  301044 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:53.264087  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:53.264155  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:53.264239  301044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:53.264345  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:53.273954  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:53.297340  301044 start.go:296] duration metric: took 120.913486ms for postStartSetup
	I0729 13:37:53.297392  301044 fix.go:56] duration metric: took 18.435815853s for fixHost
	I0729 13:37:53.297421  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.299859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300187  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.300218  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.300576  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300755  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300932  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.301116  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:53.301314  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:53.301324  301044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:53.401628  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260273.369344581
	
	I0729 13:37:53.401671  301044 fix.go:216] guest clock: 1722260273.369344581
	I0729 13:37:53.401682  301044 fix.go:229] Guest: 2024-07-29 13:37:53.369344581 +0000 UTC Remote: 2024-07-29 13:37:53.297397345 +0000 UTC m=+281.644280810 (delta=71.947236ms)
	I0729 13:37:53.401705  301044 fix.go:200] guest clock delta is within tolerance: 71.947236ms
	I0729 13:37:53.401711  301044 start.go:83] releasing machines lock for "default-k8s-diff-port-972693", held for 18.540175489s
	I0729 13:37:53.401760  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.402061  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:53.404813  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405182  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.405207  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405359  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.405844  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406153  301044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:53.406210  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.406289  301044 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:53.406315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.409060  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409351  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409460  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.409814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.409878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409909  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409992  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410092  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.410183  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.410315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.410435  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410631  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.510289  301044 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:53.517635  301044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:53.660575  301044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:53.668128  301044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:53.668207  301044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:53.690732  301044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:53.690764  301044 start.go:495] detecting cgroup driver to use...
	I0729 13:37:53.690838  301044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:53.707461  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:53.721922  301044 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:53.722004  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:53.740941  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:53.759323  301044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:53.900344  301044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:54.065647  301044 docker.go:233] disabling docker service ...
	I0729 13:37:54.065780  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:54.082468  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:54.098283  301044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:54.213104  301044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:54.339560  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:54.360412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:54.384836  301044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:37:54.384900  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.400889  301044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:54.400980  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.416941  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.433090  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.449306  301044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:54.461742  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.477135  301044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.501431  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.519646  301044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:54.532995  301044 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:54.533074  301044 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:54.550639  301044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:54.561896  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:54.710789  301044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:54.885480  301044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:54.885558  301044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:54.890556  301044 start.go:563] Will wait 60s for crictl version
	I0729 13:37:54.890629  301044 ssh_runner.go:195] Run: which crictl
	I0729 13:37:54.894644  301044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:54.941141  301044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:54.941236  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:54.983380  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:55.027770  301044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:37:53.429298  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .Start
	I0729 13:37:53.429471  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:37:53.430263  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:37:53.430649  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:37:53.431011  301425 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:37:53.431825  301425 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:37:54.749878  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:37:54.751148  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.751716  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.751784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.751696  302377 retry.go:31] will retry after 230.330776ms: waiting for machine to come up
	I0729 13:37:54.984551  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.985138  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.985183  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.985094  302377 retry.go:31] will retry after 291.000555ms: waiting for machine to come up
	I0729 13:37:55.277730  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.278199  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.278220  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.278152  302377 retry.go:31] will retry after 360.474919ms: waiting for machine to come up
	I0729 13:37:55.640675  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.641255  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.641288  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.641207  302377 retry.go:31] will retry after 480.424143ms: waiting for machine to come up
	I0729 13:37:55.029239  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:55.032722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033225  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:55.033257  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033668  301044 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:55.038429  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:55.056198  301044 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:55.056373  301044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:55.056440  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:55.100534  301044 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:37:55.100612  301044 ssh_runner.go:195] Run: which lz4
	I0729 13:37:55.105708  301044 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:37:55.110384  301044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:37:55.110417  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:37:56.630726  301044 crio.go:462] duration metric: took 1.525047583s to copy over tarball
	I0729 13:37:56.630816  301044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:37:53.446825  300746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.51288234s)
	I0729 13:37:53.446866  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.663105  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.740482  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.823641  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:37:53.823753  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.324001  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.824299  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.933931  300746 api_server.go:72] duration metric: took 1.11028623s to wait for apiserver process to appear ...
	I0729 13:37:54.933969  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:37:54.933996  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:54.934563  300746 api_server.go:269] stopped: https://192.168.61.84:8443/healthz: Get "https://192.168.61.84:8443/healthz": dial tcp 192.168.61.84:8443: connect: connection refused
	I0729 13:37:55.434598  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.005676  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.005719  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.005737  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.066371  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.066408  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.434268  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.439205  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.439240  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:58.934796  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.944368  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.944399  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.434576  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.443061  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:59.443098  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.934805  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.943892  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:37:59.955156  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:37:59.955185  300746 api_server.go:131] duration metric: took 5.021207326s to wait for apiserver health ...
	I0729 13:37:59.955197  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.955205  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:00.307264  300746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
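(Editor's note, not part of the captured log: the repeated 500/healthz dumps above record the apiserver being polled roughly every 500ms until /healthz finally returns 200. A minimal, illustrative Go sketch of such a polling loop is shown below; the URL, timeout, and insecure TLS setting are assumptions for the example, not the minikube implementation.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForAPIServer polls the /healthz endpoint until it returns 200 OK or the
// deadline passes, printing the per-check body on failures (as in the log above).
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.61.84:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}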
	I0729 13:37:56.123854  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.124460  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.124487  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.124433  302377 retry.go:31] will retry after 529.614291ms: waiting for machine to come up
	I0729 13:37:56.656136  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.656626  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.656657  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.656599  302377 retry.go:31] will retry after 794.429248ms: waiting for machine to come up
	I0729 13:37:57.452523  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:57.453001  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:57.453033  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:57.452952  302377 retry.go:31] will retry after 1.140583184s: waiting for machine to come up
	I0729 13:37:58.594636  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:58.595067  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:58.595109  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:58.595024  302377 retry.go:31] will retry after 894.563974ms: waiting for machine to come up
	I0729 13:37:59.491447  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:59.492094  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:59.492120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:59.491993  302377 retry.go:31] will retry after 1.145531829s: waiting for machine to come up
	I0729 13:38:00.639387  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:00.639807  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:00.639838  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:00.639754  302377 retry.go:31] will retry after 1.949675091s: waiting for machine to come up
	I0729 13:37:58.983188  301044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.352336314s)
	I0729 13:37:58.983233  301044 crio.go:469] duration metric: took 2.352468802s to extract the tarball
	I0729 13:37:58.983245  301044 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:37:59.022539  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:59.086881  301044 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:37:59.086913  301044 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:37:59.086924  301044 kubeadm.go:934] updating node { 192.168.50.34 8444 v1.30.3 crio true true} ...
	I0729 13:37:59.087062  301044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-972693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:59.087158  301044 ssh_runner.go:195] Run: crio config
	I0729 13:37:59.144128  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.144163  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:59.144182  301044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:59.144209  301044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-972693 NodeName:default-k8s-diff-port-972693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:59.144376  301044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-972693"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:59.144452  301044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:37:59.154648  301044 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:59.154717  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:59.164572  301044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0729 13:37:59.182967  301044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:37:59.202507  301044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0729 13:37:59.221603  301044 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:59.226646  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:59.244199  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:59.390312  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:59.411152  301044 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693 for IP: 192.168.50.34
	I0729 13:37:59.411178  301044 certs.go:194] generating shared ca certs ...
	I0729 13:37:59.411213  301044 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:59.411421  301044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:59.411481  301044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:59.411495  301044 certs.go:256] generating profile certs ...
	I0729 13:37:59.411614  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/client.key
	I0729 13:37:59.411709  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key.0cff1f82
	I0729 13:37:59.411780  301044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key
	I0729 13:37:59.411977  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:59.412036  301044 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:59.412052  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:59.412090  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:59.412124  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:59.412156  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:59.412221  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:59.413262  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:59.450186  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:59.496339  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:59.535462  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:59.569433  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:37:59.602826  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:37:59.639581  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:59.672966  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:37:59.707007  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:59.741894  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:59.771364  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:59.802928  301044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:59.828730  301044 ssh_runner.go:195] Run: openssl version
	I0729 13:37:59.837356  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:59.855071  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861707  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861781  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.870815  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:59.884842  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:59.899473  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904238  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904312  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.910221  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:59.923542  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:59.936729  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943440  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943496  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.951099  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:59.964578  301044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:59.969476  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:59.975715  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:59.981719  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:59.987788  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:59.993753  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:00.000228  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
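(Editor's note, not part of the captured log: the openssl x509 -checkend 86400 runs above ask whether each certificate will still be valid 86400 seconds, i.e. 24 hours, from now. A rough Go equivalent for a single local PEM file, using a hypothetical helper rather than the minikube code, is sketched below.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor mirrors "openssl x509 -checkend <seconds>": it reports whether
// the certificate at path is still valid at time.Now() + d.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.After(time.Now().Add(d)), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}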
	I0729 13:38:00.007898  301044 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:00.008033  301044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:00.008091  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.054999  301044 cri.go:89] found id: ""
	I0729 13:38:00.055097  301044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:00.069066  301044 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:00.069090  301044 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:00.069148  301044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:00.083486  301044 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:00.084538  301044 kubeconfig.go:125] found "default-k8s-diff-port-972693" server: "https://192.168.50.34:8444"
	I0729 13:38:00.086623  301044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:00.099514  301044 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.34
	I0729 13:38:00.099555  301044 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:00.099570  301044 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:00.099644  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.137643  301044 cri.go:89] found id: ""
	I0729 13:38:00.137726  301044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:00.157036  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:00.168591  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:00.168614  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:00.168664  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:38:00.178379  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:00.178449  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:00.189688  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:38:00.199323  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:00.199388  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:00.209351  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.219100  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:00.219171  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.228754  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:38:00.238453  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:00.238526  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:00.248479  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:00.258717  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.377121  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
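(Editor's note, not part of the captured log: the two commands above regenerate the control-plane certificates and kubeconfigs by running kubeadm's individual init phases against the staged config file. A simplified Go sketch of that sequence follows; it uses a hypothetical helper and omits the PATH handling shown in the log, so it is illustrative rather than the minikube code.)

package main

import (
	"fmt"
	"os/exec"
)

// runPhase invokes a single kubeadm init phase against the staged config file,
// mirroring the "certs" and "kubeconfig" phases in the log above.
func runPhase(phase string) error {
	cmd := exec.Command("kubeadm", "init", "phase", phase, "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubeadm init phase %s all:\n%s\n", phase, out)
	return err
}

func main() {
	for _, phase := range []string{"certs", "kubeconfig"} {
		if err := runPhase(phase); err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}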
	I0729 13:38:00.413128  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:00.424610  300746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:00.446537  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:01.601214  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:01.601265  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:01.601278  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:01.601296  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:01.601305  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:01.601312  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:38:01.601323  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:01.601332  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:01.601346  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:01.601357  300746 system_pods.go:74] duration metric: took 1.154789909s to wait for pod list to return data ...
	I0729 13:38:01.601370  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:02.057111  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:02.057149  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:02.057182  300746 node_conditions.go:105] duration metric: took 455.806302ms to run NodePressure ...
	I0729 13:38:02.057210  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.420014  300746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426444  300746 kubeadm.go:739] kubelet initialised
	I0729 13:38:02.426467  300746 kubeadm.go:740] duration metric: took 6.420611ms waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426478  300746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:02.431168  300746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.436892  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436916  300746 pod_ready.go:81] duration metric: took 5.728016ms for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.436925  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436932  300746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.443079  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443102  300746 pod_ready.go:81] duration metric: took 6.163444ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.443110  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443115  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.447945  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447964  300746 pod_ready.go:81] duration metric: took 4.843364ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.447973  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447980  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.457004  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457027  300746 pod_ready.go:81] duration metric: took 9.037058ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.457038  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457045  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.825208  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825246  300746 pod_ready.go:81] duration metric: took 368.180356ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.825259  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825268  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.225868  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.225975  300746 pod_ready.go:81] duration metric: took 400.697293ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.225993  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.226003  300746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.627568  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627605  300746 pod_ready.go:81] duration metric: took 401.589314ms for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.627618  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627628  300746 pod_ready.go:38] duration metric: took 1.201138036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
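(Editor's note, not part of the captured log: the pod_ready wait above inspects each system-critical pod for the Ready condition and skips it while the node itself reports NotReady. A minimal client-go sketch of the per-pod readiness check is given below; the kubeconfig path and pod name are taken from the log, but the helper itself is an illustrative assumption, not the minikube implementation.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has its Ready condition set to True.
func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19341-233093/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(client, "kube-system", "coredns-5cfdc65f69-kkrqd")
	fmt.Println(ok, err)
}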
	I0729 13:38:03.627651  300746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:03.646855  300746 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:03.646893  300746 kubeadm.go:597] duration metric: took 12.009173344s to restartPrimaryControlPlane
	I0729 13:38:03.646910  300746 kubeadm.go:394] duration metric: took 12.059279913s to StartCluster
	I0729 13:38:03.646936  300746 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.647029  300746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:03.649213  300746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.649527  300746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:03.649810  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:38:03.649861  300746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:03.649931  300746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-566777"
	I0729 13:38:03.649962  300746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-566777"
	W0729 13:38:03.649974  300746 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:03.650021  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650400  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.650428  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.650493  300746 addons.go:69] Setting default-storageclass=true in profile "no-preload-566777"
	I0729 13:38:03.650533  300746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-566777"
	I0729 13:38:03.650601  300746 addons.go:69] Setting metrics-server=true in profile "no-preload-566777"
	I0729 13:38:03.650631  300746 addons.go:234] Setting addon metrics-server=true in "no-preload-566777"
	W0729 13:38:03.650642  300746 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:03.650675  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650985  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651014  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651029  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651054  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651324  300746 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:03.652887  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:03.670088  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0729 13:38:03.670283  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0729 13:38:03.670694  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.670769  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.671418  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671423  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671437  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671440  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671755  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0729 13:38:03.671900  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.671927  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.672491  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.672515  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.672711  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.673183  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.673207  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.673468  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.673480  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.673857  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.674012  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.677726  300746 addons.go:234] Setting addon default-storageclass=true in "no-preload-566777"
	W0729 13:38:03.677746  300746 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:03.677777  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.678133  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.678151  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.692817  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 13:38:03.693446  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.693919  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.693945  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.694335  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.694504  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.694718  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0729 13:38:03.695225  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.695726  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.695744  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.696028  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.696154  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.696514  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.697635  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.698597  300746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:03.699466  300746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:03.700447  300746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:03.700463  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:03.700481  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.701375  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:03.701390  300746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:03.701404  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.705199  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705225  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705844  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705866  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705893  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705911  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705946  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706143  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706313  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.706471  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.706755  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.708988  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.710193  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0729 13:38:03.710735  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.711282  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.711296  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.711684  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.712271  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.712322  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.712966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.713103  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.756710  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 13:38:03.757254  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.757760  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.757784  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.758125  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.758376  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.760315  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.760577  300746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:03.760594  300746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:03.760612  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.763679  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.764208  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.764277  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.765045  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.765227  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.765386  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.765546  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.883257  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:03.905104  300746 node_ready.go:35] waiting up to 6m0s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:03.985382  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:03.985412  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:04.014094  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:04.014119  300746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:04.016390  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:04.047695  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:04.062249  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:04.062328  300746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:04.095999  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:05.473341  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4569173s)
	I0729 13:38:05.473396  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473409  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.473421  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.425688075s)
	I0729 13:38:05.473547  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473558  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474089  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.474117  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474129  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474133  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474137  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474142  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474158  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474148  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474213  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.475707  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.475738  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.475746  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.476002  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.476095  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.476124  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.490038  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.490081  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.490420  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.490440  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562064  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46596112s)
	I0729 13:38:05.562122  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562136  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.562492  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.562516  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562532  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562541  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.564397  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.564410  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.564448  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.564471  300746 addons.go:475] Verifying addon metrics-server=true in "no-preload-566777"
	I0729 13:38:05.566888  300746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
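	[editor note: a hedged, illustrative check, not part of the captured log. After the addon apply and the "Verifying addon metrics-server=true" step above, the same state could be confirmed by hand from the same kubeconfig; the context name is assumed to match the profile name no-preload-566777.]
	  kubectl --context no-preload-566777 -n kube-system rollout status deploy/metrics-server --timeout=2m
	  kubectl --context no-preload-566777 get apiservice v1beta1.metrics.k8s.io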
	I0729 13:38:02.590640  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:02.591134  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:02.591162  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:02.591087  302377 retry.go:31] will retry after 1.765945358s: waiting for machine to come up
	I0729 13:38:04.358332  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:04.358934  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:04.358963  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:04.358899  302377 retry.go:31] will retry after 2.923224015s: waiting for machine to come up
	I0729 13:38:01.713425  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.33625836s)
	I0729 13:38:01.713462  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:01.941164  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.017707  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.134991  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:02.135105  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:02.636248  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.135563  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.264470  301044 api_server.go:72] duration metric: took 1.129485078s to wait for apiserver process to appear ...
	I0729 13:38:03.264512  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:03.264545  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.392570  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.392609  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.392626  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.423076  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.423120  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.764837  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.770393  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:06.770428  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.264879  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.269632  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:07.269670  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.764878  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.770291  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:38:07.781660  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:07.781691  301044 api_server.go:131] duration metric: took 4.517171532s to wait for apiserver health ...
	I0729 13:38:07.781700  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:38:07.781707  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:07.784769  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
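	[editor note: a minimal sketch of the poll-until-healthy pattern visible in the log above, not the minikube implementation. Anonymous requests can return 403, the bootstrap phase returns 500 until the rbac/scheduling post-start hooks finish, and a plain "ok" (HTTP 200) means the control plane is up; the endpoint is the one logged above.]
	  for i in $(seq 1 240); do
	    if curl -sk https://192.168.50.34:8444/healthz | grep -qx ok; then
	      echo "apiserver healthy"; break
	    fi
	    sleep 1
	  done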
	I0729 13:38:05.568441  300746 addons.go:510] duration metric: took 1.918571396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:38:05.916109  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:07.284234  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:07.284764  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:07.284819  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:07.284694  302377 retry.go:31] will retry after 2.9786525s: waiting for machine to come up
	I0729 13:38:10.265771  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:10.266128  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:10.266161  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:10.266077  302377 retry.go:31] will retry after 5.044155966s: waiting for machine to come up
	I0729 13:38:07.786038  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:07.824838  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:07.850139  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:07.862900  301044 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:07.862932  301044 system_pods.go:61] "coredns-7db6d8ff4d-zllk5" [3ebb659a-7849-498b-a81c-54f75c8e1536] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:07.862943  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [fc5c7286-5cd4-4eeb-879e-6263f82c4164] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:07.862950  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [a3a13c0b-844d-4a5b-93a0-fb9784b4b095] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:07.862957  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4e6c469d-b2a5-4ec2-95a4-01b6ad7de347] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:07.862964  301044 system_pods.go:61] "kube-proxy-6hxkb" [42b01d8b-9a37-40d0-ac32-09e3e261f953] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:07.862979  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [2373a650-57bb-4dc3-96ab-7f6cd040c148] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:07.862985  301044 system_pods.go:61] "metrics-server-569cc877fc-dlrjb" [360087fa-273d-4ba8-a299-54678724c45e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:07.862990  301044 system_pods.go:61] "storage-provisioner" [3e3fb5ef-6761-4671-a093-8616241cd98f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:07.862996  301044 system_pods.go:74] duration metric: took 12.833023ms to wait for pod list to return data ...
	I0729 13:38:07.863007  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:07.868359  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:07.868385  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:07.868395  301044 node_conditions.go:105] duration metric: took 5.383164ms to run NodePressure ...
	I0729 13:38:07.868412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:08.166890  301044 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175546  301044 kubeadm.go:739] kubelet initialised
	I0729 13:38:08.175570  301044 kubeadm.go:740] duration metric: took 8.646638ms waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175588  301044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.186944  301044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.194446  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194479  301044 pod_ready.go:81] duration metric: took 7.500494ms for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.194487  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194495  301044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.202341  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202366  301044 pod_ready.go:81] duration metric: took 7.863125ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.202380  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202388  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.209017  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209041  301044 pod_ready.go:81] duration metric: took 6.646309ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.209051  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209057  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.256503  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256530  301044 pod_ready.go:81] duration metric: took 47.465005ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.256543  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256552  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652875  301044 pod_ready.go:92] pod "kube-proxy-6hxkb" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:08.652901  301044 pod_ready.go:81] duration metric: took 396.340654ms for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652912  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.658352  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:08.411629  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:08.908602  300746 node_ready.go:49] node "no-preload-566777" has status "Ready":"True"
	I0729 13:38:08.908629  300746 node_ready.go:38] duration metric: took 5.003487604s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:08.908639  300746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.914468  300746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.921796  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
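	[editor note: a hedged equivalent of the readiness wait logged above, assuming the kubeconfig context matches the profile name; coredns pods carry the standard k8s-app=kube-dns label.]
	  kubectl --context no-preload-566777 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m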
	I0729 13:38:15.313102  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313621  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313650  301425 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:38:15.313665  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:38:15.314120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.314168  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | skip adding static IP to network mk-old-k8s-version-924039 - found existing host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"}
	I0729 13:38:15.314187  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:38:15.314205  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:38:15.314219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:38:15.316468  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316779  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.316827  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316994  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:38:15.317013  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:38:15.317042  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:15.317054  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:38:15.317076  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:38:15.444818  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:15.445203  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:38:15.445858  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.448296  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.448784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.448834  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.449028  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:38:15.449208  301425 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:15.449226  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:15.449469  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.451695  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452017  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.452046  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.452420  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452606  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452770  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.452945  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.453151  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.453165  301425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:15.561558  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:15.561590  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.561859  301425 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:38:15.561887  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.562079  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.564776  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565116  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.565157  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565286  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.565495  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565669  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565805  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.565952  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.566129  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.566140  301425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:38:15.687712  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:38:15.687744  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.690289  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690614  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.690638  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690864  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.691104  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691290  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691463  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.691649  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.691841  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.691869  301425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:15.814102  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:15.814140  301425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:15.814190  301425 buildroot.go:174] setting up certificates
	I0729 13:38:15.814198  301425 provision.go:84] configureAuth start
	I0729 13:38:15.814210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.814521  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.817140  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817548  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.817583  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817728  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.819957  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820307  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.820335  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820476  301425 provision.go:143] copyHostCerts
	I0729 13:38:15.820529  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:15.820539  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:15.820592  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:15.820685  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:15.820693  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:15.820713  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:15.820772  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:15.820779  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:15.820828  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:15.820909  301425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:38:15.895797  301425 provision.go:177] copyRemoteCerts
	I0729 13:38:15.895866  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:15.895898  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.898774  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899173  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.899214  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899444  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.899672  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.899882  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.900048  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.606081  300705 start.go:364] duration metric: took 56.40993179s to acquireMachinesLock for "embed-certs-135920"
	I0729 13:38:16.606131  300705 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:38:16.606139  300705 fix.go:54] fixHost starting: 
	I0729 13:38:16.606611  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:16.606652  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:16.626502  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0729 13:38:16.626989  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:16.627491  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:16.627511  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:16.627897  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:16.628100  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:16.628242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:16.629856  300705 fix.go:112] recreateIfNeeded on embed-certs-135920: state=Stopped err=<nil>
	I0729 13:38:16.629879  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	W0729 13:38:16.630046  300705 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:38:16.632177  300705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-135920" ...
	I0729 13:38:12.659133  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.159457  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.159792  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.159818  301044 pod_ready.go:81] duration metric: took 7.506898395s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.159827  301044 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.633625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Start
	I0729 13:38:16.633803  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring networks are active...
	I0729 13:38:16.634580  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network default is active
	I0729 13:38:16.634947  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network mk-embed-certs-135920 is active
	I0729 13:38:16.635454  300705 main.go:141] libmachine: (embed-certs-135920) Getting domain xml...
	I0729 13:38:16.636201  300705 main.go:141] libmachine: (embed-certs-135920) Creating domain...
	I0729 13:38:15.988091  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:16.019058  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:38:16.047266  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:16.072992  301425 provision.go:87] duration metric: took 258.777499ms to configureAuth
	I0729 13:38:16.073029  301425 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:16.073250  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:38:16.073338  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.075801  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.076219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076350  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.076560  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076750  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076972  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.077169  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.077354  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.077369  301425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:16.357614  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:16.357650  301425 machine.go:97] duration metric: took 908.424232ms to provisionDockerMachine
	I0729 13:38:16.357666  301425 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:38:16.357680  301425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:16.357706  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.358060  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:16.358089  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.360841  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361257  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.361314  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361410  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.361645  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.361821  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.361987  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.448673  301425 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:16.453435  301425 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:16.453461  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:16.453543  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:16.453638  301425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:16.453763  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:16.464185  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:16.490358  301425 start.go:296] duration metric: took 132.675687ms for postStartSetup
	I0729 13:38:16.490422  301425 fix.go:56] duration metric: took 23.088507704s for fixHost
	I0729 13:38:16.490450  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.493249  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493571  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.493612  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493781  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.494046  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494241  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494388  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.494561  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.494759  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.494769  301425 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:16.605903  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260296.583363181
	
	I0729 13:38:16.605930  301425 fix.go:216] guest clock: 1722260296.583363181
	I0729 13:38:16.605940  301425 fix.go:229] Guest: 2024-07-29 13:38:16.583363181 +0000 UTC Remote: 2024-07-29 13:38:16.490427183 +0000 UTC m=+245.556685019 (delta=92.935998ms)
	I0729 13:38:16.605967  301425 fix.go:200] guest clock delta is within tolerance: 92.935998ms
	I0729 13:38:16.605974  301425 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 23.204101255s
	I0729 13:38:16.606006  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.606296  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:16.609324  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609669  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.609701  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609826  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610328  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610516  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610589  301425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:16.610673  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.610758  301425 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:16.610786  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.613356  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613639  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613689  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.613712  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613910  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614092  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.614112  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.614122  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614287  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614307  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614449  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.614496  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614635  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614771  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.719174  301425 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:16.726348  301425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:16.880130  301425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:16.886410  301425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:16.886484  301425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:16.904120  301425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:16.904151  301425 start.go:495] detecting cgroup driver to use...
	I0729 13:38:16.904222  301425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:16.927036  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:16.947380  301425 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:16.947448  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:16.964612  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:16.979266  301425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:17.108950  301425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:17.263118  301425 docker.go:233] disabling docker service ...
	I0729 13:38:17.263192  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:17.282563  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:17.299473  301425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:17.448598  301425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:17.568025  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:17.583700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:17.603159  301425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:38:17.603223  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.615655  301425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:17.615728  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.628639  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.640456  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
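
	The sed invocations above each rewrite a single key in CRI-O's drop-in config: the pause image, the cgroup manager, and the conmon cgroup. A minimal Go sketch of the first two edits, run against a scratch copy so it is safe outside the VM; 02-crio.conf.example is a stand-in, not the real /etc/crio/crio.conf.d/02-crio.conf.

// criocfg.go - sketch of the pause_image / cgroup_manager rewrite on a scratch file.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf.example" // stand-in for the real drop-in file
	seed := "pause_image = \"example/pause\"\ncgroup_manager = \"systemd\"\n"
	if err := os.WriteFile(path, []byte(seed), 0644); err != nil {
		fmt.Println("seed failed:", err)
		return
	}

	data, _ := os.ReadFile(path)
	// Same substitutions the sed commands perform.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Print(string(data))
}
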
	I0729 13:38:17.652160  301425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:17.663864  301425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:17.675293  301425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:17.675361  301425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:17.690427  301425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
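
	The status-255 failure above is the expected path when the br_netfilter module is not loaded yet: the sysctl key does not exist, so the fallback is to modprobe the module and then enable IPv4 forwarding. A minimal local Go sketch of that fallback, assuming a Linux host with sudo; this is not minikube's ssh_runner-based code.

// netfilter.go - sketch of the sysctl-check / modprobe fallback (Linux + sudo assumed).
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w (%s)", name, err, out)
	}
	return nil
}

func main() {
	// Check whether bridge netfilter is already reachable.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Enable IPv4 forwarding, as the final step above does.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
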
	I0729 13:38:17.702163  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:17.831401  301425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:17.985760  301425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:17.985851  301425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:17.990740  301425 start.go:563] Will wait 60s for crictl version
	I0729 13:38:17.990798  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:17.994741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:18.035793  301425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:18.035889  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.065036  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.097441  301425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:38:13.421995  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.944090  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.933596  300746 pod_ready.go:92] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.933621  300746 pod_ready.go:81] duration metric: took 8.019124005s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.933634  300746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943434  300746 pod_ready.go:92] pod "etcd-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.943465  300746 pod_ready.go:81] duration metric: took 9.816863ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943478  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952623  300746 pod_ready.go:92] pod "kube-apiserver-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.952644  300746 pod_ready.go:81] duration metric: took 9.157998ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952653  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.956989  300746 pod_ready.go:92] pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.957010  300746 pod_ready.go:81] duration metric: took 4.350015ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.957023  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962772  300746 pod_ready.go:92] pod "kube-proxy-ql6wf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.962796  300746 pod_ready.go:81] duration metric: took 5.763769ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962807  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318604  300746 pod_ready.go:92] pod "kube-scheduler-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:17.318632  300746 pod_ready.go:81] duration metric: took 355.816982ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318642  300746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:18.098840  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:18.102182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102629  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:18.102665  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102925  301425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:18.107544  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:18.122039  301425 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:18.122176  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:38:18.122249  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:18.169198  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:18.169279  301425 ssh_runner.go:195] Run: which lz4
	I0729 13:38:18.173861  301425 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:18.178840  301425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:18.178881  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:38:19.887360  301425 crio.go:462] duration metric: took 1.713549828s to copy over tarball
	I0729 13:38:19.887450  301425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
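
	The sequence above is the preload path: stat the tarball on the guest, copy it over only when it is missing, then unpack it under /var with extended attributes preserved so image layers keep their capabilities. A minimal Go sketch of the check-then-extract step; the path is illustrative and lz4 plus sudo are assumed to be available.

// preload.go - sketch of the preload tarball existence check and extraction.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // illustrative path from the log

	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		fmt.Println("tarball missing; this is where it would be copied over SSH")
		return
	}

	// Same extraction command the log runs on the guest.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted")
}
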
	I0729 13:38:18.167033  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:20.168009  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:17.933984  300705 main.go:141] libmachine: (embed-certs-135920) Waiting to get IP...
	I0729 13:38:17.935033  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:17.935595  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:17.935652  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:17.935560  302586 retry.go:31] will retry after 195.331915ms: waiting for machine to come up
	I0729 13:38:18.133074  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.133566  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.133592  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.133513  302586 retry.go:31] will retry after 348.993714ms: waiting for machine to come up
	I0729 13:38:18.484164  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.484746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.484771  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.484703  302586 retry.go:31] will retry after 372.899167ms: waiting for machine to come up
	I0729 13:38:18.859212  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.859721  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.859746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.859672  302586 retry.go:31] will retry after 415.38859ms: waiting for machine to come up
	I0729 13:38:19.276241  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.276785  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.276816  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.276715  302586 retry.go:31] will retry after 553.262343ms: waiting for machine to come up
	I0729 13:38:19.831475  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.831994  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.832030  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.831949  302586 retry.go:31] will retry after 579.574559ms: waiting for machine to come up
	I0729 13:38:20.412838  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:20.413273  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:20.413302  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:20.413225  302586 retry.go:31] will retry after 908.712618ms: waiting for machine to come up
	I0729 13:38:21.324197  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:21.324824  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:21.324849  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:21.324723  302586 retry.go:31] will retry after 1.4226484s: waiting for machine to come up
	I0729 13:38:19.328753  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:21.330005  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.836067  301425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.948583188s)
	I0729 13:38:22.836104  301425 crio.go:469] duration metric: took 2.948710335s to extract the tarball
	I0729 13:38:22.836114  301425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:22.878370  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:22.921339  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:22.921370  301425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:38:22.921445  301425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.921545  301425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.921547  301425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:38:22.921633  301425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:22.921475  301425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.921479  301425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923052  301425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:38:22.923712  301425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.923723  301425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923733  301425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.923743  301425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.923803  301425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.923923  301425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.923976  301425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.079335  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.095210  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.096664  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.109172  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.111720  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.114386  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.200545  301425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:38:23.200629  301425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.200698  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.203884  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:38:23.261424  301425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:38:23.261500  301425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.261528  301425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:38:23.261561  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.261569  301425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.261610  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.267971  301425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:38:23.268018  301425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.268075  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317322  301425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:38:23.317369  301425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.317387  301425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:38:23.317422  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317441  301425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.317440  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.317489  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317507  301425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:38:23.317530  301425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:38:23.317551  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.317588  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.317553  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317683  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.322770  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.432764  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:38:23.432833  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:38:23.432877  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.442661  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:38:23.442741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:38:23.442785  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:38:23.442825  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:38:23.481401  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:38:23.484727  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:38:24.057020  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:24.203622  301425 cache_images.go:92] duration metric: took 1.282232497s to LoadCachedImages
	W0729 13:38:24.203724  301425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
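
	Each "needs transfer" line above comes from comparing the image ID reported by the runtime with the hash expected for that tag; when the IDs differ or the image is absent, the image is loaded from the local cache instead. A minimal Go sketch of that decision for a single image, assuming podman is installed; the image name and hash are copied from the log lines above.

// imagecheck.go - sketch of the "needs transfer" decision for one image.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/pause:3.2"
	id := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	if needsTransfer(img, id) {
		fmt.Println(img, "needs transfer from the local cache")
	} else {
		fmt.Println(img, "is already present in the container runtime")
	}
}
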
	I0729 13:38:24.203742  301425 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:38:24.203883  301425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:24.203996  301425 ssh_runner.go:195] Run: crio config
	I0729 13:38:24.274480  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:38:24.274531  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:24.274547  301425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:24.274582  301425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:38:24.274784  301425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:24.274863  301425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:38:24.285241  301425 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:24.285333  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:24.294677  301425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:38:24.311572  301425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:24.328768  301425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:38:24.346849  301425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:24.351047  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
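
	The bash one-liner above rewrites the hosts file in place: it drops any existing line for control-plane.minikube.internal and appends the current control-plane IP. A minimal Go sketch of the same idea against a scratch file; hosts.example is a stand-in so the sketch can run without sudo or touching /etc/hosts.

// hostsentry.go - sketch of the hosts-file update, run against a scratch copy.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Keep every line that does not already map this hostname.
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	tmp := "hosts.example" // stand-in for /etc/hosts
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostEntry(tmp, "192.168.39.227", "control-plane.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	out, _ := os.ReadFile(tmp)
	fmt.Print(string(out))
}
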
	I0729 13:38:24.364302  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:24.502947  301425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:24.524583  301425 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:38:24.524610  301425 certs.go:194] generating shared ca certs ...
	I0729 13:38:24.524626  301425 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:24.524831  301425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:24.524889  301425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:24.524908  301425 certs.go:256] generating profile certs ...
	I0729 13:38:24.525030  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:38:24.525090  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:38:24.525143  301425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:38:24.525300  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:24.525345  301425 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:24.525359  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:24.525390  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:24.525416  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:24.525440  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:24.525495  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:24.526416  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:24.593901  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:24.641443  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:24.679927  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:24.740839  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:38:24.779899  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:38:24.814327  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:24.842166  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:38:24.868619  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:24.894053  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:24.921437  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:24.947676  301425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:24.966469  301425 ssh_runner.go:195] Run: openssl version
	I0729 13:38:24.972780  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:24.985676  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990293  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990356  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.996523  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:25.007631  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:25.018369  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022779  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022840  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.028471  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:25.039307  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:25.050190  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054731  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054799  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.060568  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:25.071531  301425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:25.076195  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:25.082194  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:25.088573  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:25.095625  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:25.101900  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:25.107797  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
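
	The run of openssl x509 -checkend 86400 calls above asks, for each control-plane certificate, whether it will still be valid 24 hours from now. The same check can be done natively; a minimal Go sketch follows, with an illustrative local file name since the real certs live on the VM under /var/lib/minikube/certs.

// certexpiry.go - sketch of the 24h certificate-expiry check using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour) // illustrative local path
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
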
	I0729 13:38:25.113775  301425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:25.113903  301425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:25.113975  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.159804  301425 cri.go:89] found id: ""
	I0729 13:38:25.159887  301425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:25.172248  301425 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:25.172271  301425 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:25.172321  301425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:25.182852  301425 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:25.184249  301425 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:25.186246  301425 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-924039" cluster setting kubeconfig missing "old-k8s-version-924039" context setting]
	I0729 13:38:25.188334  301425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:25.262355  301425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:25.274019  301425 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0729 13:38:25.274063  301425 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:25.274078  301425 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:25.274141  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.311295  301425 cri.go:89] found id: ""
	I0729 13:38:25.311365  301425 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:25.330380  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:25.343607  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:25.343651  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:25.343709  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:25.356979  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:25.357048  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:25.370453  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:25.386234  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:25.386308  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:25.403905  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.413906  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:25.414011  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.431532  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:25.448250  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:25.448325  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
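
	The four grep-then-rm exchanges above are a sweep over the kubeconfigs left behind from the previous run: any file that does not reference https://control-plane.minikube.internal:8443 is removed so the following init phases regenerate it. A minimal Go sketch of that sweep; the paths and endpoint are copied from the log, and run outside the VM it simply reports the files as unreadable (removing files under /etc/kubernetes would in practice need root).

// staleconf.go - sketch of the stale kubeconfig cleanup loop.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Printf("%s: not readable (%v), nothing to clean\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s; removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}
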
	I0729 13:38:25.459773  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:25.469841  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:25.584845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:22.667857  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:24.668022  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.748882  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:22.749346  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:22.749368  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:22.749292  302586 retry.go:31] will retry after 1.460248931s: waiting for machine to come up
	I0729 13:38:24.212019  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:24.212538  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:24.212567  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:24.212479  302586 retry.go:31] will retry after 1.462429402s: waiting for machine to come up
	I0729 13:38:25.676972  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:25.677407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:25.677429  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:25.677368  302586 retry.go:31] will retry after 2.551129627s: waiting for machine to come up
	I0729 13:38:23.826435  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:25.826981  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.325176  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:26.367294  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.618571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.775377  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.860948  301425 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:26.861038  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.361227  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.362003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.861172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.361165  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.861469  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.361306  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.861442  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.167961  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:29.667405  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.230763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:28.231276  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:28.231299  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:28.231239  302586 retry.go:31] will retry after 2.333059097s: waiting for machine to come up
	I0729 13:38:30.566386  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:30.566786  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:30.566815  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:30.566733  302586 retry.go:31] will retry after 3.717362174s: waiting for machine to come up
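(Editorial note: the libmachine lines above poll the libvirt DHCP leases for the guest's IP and sleep a slightly longer, jittered interval between attempts. A minimal Go sketch of that retry-with-backoff shape; the lookupIP stub, attempt count, and intervals are assumptions for illustration, not minikube's actual retry.go.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases;
// it fails until the guest has actually acquired an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the interval and add a little jitter, as the
		// "will retry after 1.46s / 2.55s / 3.72s" lines suggest.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, wait)
		time.Sleep(wait)
		backoff += backoff / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}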
	I0729 13:38:30.326143  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:32.825635  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:31.361866  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.361776  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.862004  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.361883  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.862010  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.362013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.861958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.361390  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.861465  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
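(Editorial note: the pgrep lines spaced roughly 500ms apart are a poll loop waiting for the kube-apiserver process to appear after the control-plane phase. A rough Go equivalent of that wait; the pgrep pattern is taken from the log, while the two-minute deadline and running pgrep without sudo are assumptions for the sketch.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pattern := "kube-apiserver.*minikube.*"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 and prints a PID once a matching process exists.
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			fmt.Printf("apiserver process appeared: pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process to appear")
}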
	I0729 13:38:32.165082  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.165674  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.165885  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.288242  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288935  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has current primary IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288968  300705 main.go:141] libmachine: (embed-certs-135920) Found IP for machine: 192.168.72.207
	I0729 13:38:34.288987  300705 main.go:141] libmachine: (embed-certs-135920) Reserving static IP address...
	I0729 13:38:34.289557  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.289586  300705 main.go:141] libmachine: (embed-certs-135920) Reserved static IP address: 192.168.72.207
	I0729 13:38:34.289604  300705 main.go:141] libmachine: (embed-certs-135920) DBG | skip adding static IP to network mk-embed-certs-135920 - found existing host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"}
	I0729 13:38:34.289619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Getting to WaitForSSH function...
	I0729 13:38:34.289635  300705 main.go:141] libmachine: (embed-certs-135920) Waiting for SSH to be available...
	I0729 13:38:34.291951  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292308  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.292340  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292589  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH client type: external
	I0729 13:38:34.292619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa (-rw-------)
	I0729 13:38:34.292651  300705 main.go:141] libmachine: (embed-certs-135920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:34.292665  300705 main.go:141] libmachine: (embed-certs-135920) DBG | About to run SSH command:
	I0729 13:38:34.292677  300705 main.go:141] libmachine: (embed-certs-135920) DBG | exit 0
	I0729 13:38:34.417738  300705 main.go:141] libmachine: (embed-certs-135920) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:34.418128  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetConfigRaw
	I0729 13:38:34.418881  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.421524  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.421875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.421911  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.422113  300705 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/config.json ...
	I0729 13:38:34.422306  300705 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:34.422325  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:34.422544  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.424658  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.425073  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425167  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.425365  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425575  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425786  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.425935  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.426155  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.426172  300705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:34.529324  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:34.529354  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529600  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:38:34.529625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.532564  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.532966  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.533001  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.533274  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.533502  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533701  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533906  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.534116  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.534339  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.534353  300705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-135920 && echo "embed-certs-135920" | sudo tee /etc/hostname
	I0729 13:38:34.651175  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-135920
	
	I0729 13:38:34.651203  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.653763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.654085  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654266  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.654460  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654647  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654838  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.655024  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.655230  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.655246  300705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-135920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-135920/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-135920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:34.769548  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
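(Editorial note: the provisioning script above sets the hostname and then makes an idempotent /etc/hosts edit — leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line, otherwise append one. A sketch of the same idempotent rewrite in Go; the path and hostname are the ones in the log, the function itself is not the provisioner's code, and writing /etc/hosts requires root.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: no-op if the hostname is already
// present, rewrite an existing 127.0.1.1 line, otherwise append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 1 && f[len(f)-1] == hostname {
			return nil // already mapped
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-135920"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}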
	I0729 13:38:34.769579  300705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:34.769597  300705 buildroot.go:174] setting up certificates
	I0729 13:38:34.769605  300705 provision.go:84] configureAuth start
	I0729 13:38:34.769613  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.769910  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.772513  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.772833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.772859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.773005  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.775133  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775480  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.775506  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775607  300705 provision.go:143] copyHostCerts
	I0729 13:38:34.775671  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:34.775681  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:34.775738  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:34.775828  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:34.775836  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:34.775855  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:34.775909  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:34.775916  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:34.775932  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:34.775981  300705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.embed-certs-135920 san=[127.0.0.1 192.168.72.207 embed-certs-135920 localhost minikube]
	I0729 13:38:34.901161  300705 provision.go:177] copyRemoteCerts
	I0729 13:38:34.901230  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:34.901258  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.903730  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.904060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904245  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.904428  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.904606  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.904726  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:34.986647  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:35.010406  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:38:35.033884  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:35.057289  300705 provision.go:87] duration metric: took 287.670762ms to configureAuth
	I0729 13:38:35.057318  300705 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:35.057521  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:35.057621  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.060303  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060634  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.060667  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060840  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.061053  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061259  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061433  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.061599  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.061775  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.061792  300705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:35.344890  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:35.344923  300705 machine.go:97] duration metric: took 922.603779ms to provisionDockerMachine
	I0729 13:38:35.344936  300705 start.go:293] postStartSetup for "embed-certs-135920" (driver="kvm2")
	I0729 13:38:35.344947  300705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:35.344964  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.345304  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:35.345341  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.348029  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348420  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.348458  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348612  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.348832  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.348981  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.349112  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.431975  300705 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:35.436416  300705 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:35.436441  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:35.436522  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:35.436621  300705 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:35.436767  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:35.446166  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:35.473466  300705 start.go:296] duration metric: took 128.511199ms for postStartSetup
	I0729 13:38:35.473513  300705 fix.go:56] duration metric: took 18.867373858s for fixHost
	I0729 13:38:35.473540  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.476118  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476477  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.476504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476672  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.476877  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477093  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477241  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.477468  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.477642  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.477652  300705 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:38:35.577853  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260315.546644144
	
	I0729 13:38:35.577882  300705 fix.go:216] guest clock: 1722260315.546644144
	I0729 13:38:35.577892  300705 fix.go:229] Guest: 2024-07-29 13:38:35.546644144 +0000 UTC Remote: 2024-07-29 13:38:35.473518121 +0000 UTC m=+357.868969453 (delta=73.126023ms)
	I0729 13:38:35.577919  300705 fix.go:200] guest clock delta is within tolerance: 73.126023ms
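(Editorial note: the fix.go lines compare the guest's clock, read over SSH with date +%s.%N, against the host's and only resync when the delta exceeds a tolerance. A trimmed Go illustration of that comparison using the two timestamps from the log; the 2-second threshold is an assumption for the example, not a value taken from minikube.)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the log: guest clock read over SSH vs. the host's clock.
	guest := time.Unix(1722260315, 546644144)
	host := time.Date(2024, 7, 29, 13, 38, 35, 473518121, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}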
	I0729 13:38:35.577926  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 18.971820448s
	I0729 13:38:35.577950  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.578260  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:35.581109  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581474  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.581507  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581707  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582287  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582451  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582562  300705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:35.582616  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.582645  300705 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:35.582673  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.585527  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585555  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585989  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586021  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586062  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586084  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586171  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586351  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586360  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586573  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586582  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586795  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586838  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.586942  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.686359  300705 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:35.692726  300705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:35.838487  300705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:35.844313  300705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:35.844416  300705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:35.861079  300705 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:35.861103  300705 start.go:495] detecting cgroup driver to use...
	I0729 13:38:35.861178  300705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:35.880678  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:35.897996  300705 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:35.898070  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:35.915337  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:35.930990  300705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:36.039923  300705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:36.198255  300705 docker.go:233] disabling docker service ...
	I0729 13:38:36.198340  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:36.213373  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:36.227364  300705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:36.351279  300705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:36.468325  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:36.483692  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:36.503872  300705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:38:36.503945  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.515397  300705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:36.515502  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.527170  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.538668  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.550013  300705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:36.561402  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.573747  300705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.594158  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
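(Editorial note: the sed invocations above pin the pause image and the cgroup manager in CRI-O's drop-in config. A sketch of the same in-place rewrite done natively in Go; the file path and the two target values come from the log, while the regexp-based approach is illustrative — the tool itself shells out to sed over SSH.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf forces the pause image and cgroup manager lines in
// CRI-O's drop-in config, mirroring the sed edits shown above.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}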
	I0729 13:38:36.606047  300705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:36.616858  300705 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:36.616961  300705 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:36.633281  300705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
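(Editorial note: after the netfilter sysctl probe fails, the runner loads br_netfilter and enables IPv4 forwarding. A small Go sketch of that fallback; the proc paths and module name are from the log, and both steps need root, which the sketch does not attempt to acquire.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl isn't present, the module needs loading,
	// which is the fallback the log shows after the sysctl probe fails.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}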
	I0729 13:38:36.644423  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:36.779934  300705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:36.924394  300705 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:36.924483  300705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:36.929889  300705 start.go:563] Will wait 60s for crictl version
	I0729 13:38:36.929935  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:38:36.933671  300705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:36.973428  300705 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:36.973506  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.002245  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.034982  300705 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:38:37.036162  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:37.039092  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:37.039533  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039697  300705 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:37.044028  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:37.057278  300705 kubeadm.go:883] updating cluster {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:37.057398  300705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:38:37.057504  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:37.096111  300705 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:38:37.096205  300705 ssh_runner.go:195] Run: which lz4
	I0729 13:38:37.100600  300705 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:38:37.104942  300705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:37.104974  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:38:35.325849  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:37.326770  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.362042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.862022  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.361208  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.862020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.362115  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.861360  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.362077  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.861478  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.361278  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.861920  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.167072  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:40.667067  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:38.548671  300705 crio.go:462] duration metric: took 1.448103052s to copy over tarball
	I0729 13:38:38.548764  300705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:40.801144  300705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.252337742s)
	I0729 13:38:40.801177  300705 crio.go:469] duration metric: took 2.252468783s to extract the tarball
	I0729 13:38:40.801185  300705 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:40.840132  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:40.887424  300705 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:38:40.887447  300705 cache_images.go:84] Images are preloaded, skipping loading
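(Editorial note: the preload step above copies an lz4-compressed image tarball into the guest, extracts it over /var so CRI-O's image store is already populated, and records a duration metric for each phase. A small Go sketch of timing such an extraction; the tar flags and /preloaded.tar.lz4 path are the ones in the log, the rest is illustrative.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same invocation as the log: extract the lz4 tarball over /var,
	// preserving security.capability xattrs so binaries keep their caps.
	cmd := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}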
	I0729 13:38:40.887456  300705 kubeadm.go:934] updating node { 192.168.72.207 8443 v1.30.3 crio true true} ...
	I0729 13:38:40.887583  300705 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-135920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:40.887661  300705 ssh_runner.go:195] Run: crio config
	I0729 13:38:40.943732  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:40.943759  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:40.943771  300705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:40.943801  300705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.207 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-135920 NodeName:embed-certs-135920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:38:40.943967  300705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-135920"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:40.944048  300705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:38:40.954284  300705 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:40.954354  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:40.963877  300705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 13:38:40.981828  300705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:40.999273  300705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 13:38:41.016590  300705 ssh_runner.go:195] Run: grep 192.168.72.207	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:41.020149  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:41.031970  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:41.163779  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:41.181723  300705 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920 for IP: 192.168.72.207
	I0729 13:38:41.181746  300705 certs.go:194] generating shared ca certs ...
	I0729 13:38:41.181764  300705 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:41.181989  300705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:41.182053  300705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:41.182067  300705 certs.go:256] generating profile certs ...
	I0729 13:38:41.182191  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/client.key
	I0729 13:38:41.182257  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key.45ab1b35
	I0729 13:38:41.182306  300705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key
	I0729 13:38:41.182454  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:41.182501  300705 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:41.182517  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:41.182553  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:41.182583  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:41.182607  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:41.182647  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:41.183522  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:41.239170  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:41.278086  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:41.318584  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:41.351639  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 13:38:41.389242  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:38:41.414897  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:41.439178  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:38:41.464278  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:41.488391  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:41.515271  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:41.539904  300705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:41.557036  300705 ssh_runner.go:195] Run: openssl version
	I0729 13:38:41.562935  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:41.580782  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585603  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585670  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.591504  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:41.602129  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:41.612441  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616813  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616866  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.622328  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:41.633108  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:41.643897  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648369  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648415  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.654085  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:41.665037  300705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:41.670067  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:41.676340  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:41.682386  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:41.688809  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:41.694957  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:41.700469  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:38:41.706471  300705 kubeadm.go:392] StartCluster: {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:41.706561  300705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:41.706617  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.746623  300705 cri.go:89] found id: ""
	I0729 13:38:41.746703  300705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:41.757101  300705 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:41.757121  300705 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:41.757174  300705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:41.766817  300705 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:41.767837  300705 kubeconfig.go:125] found "embed-certs-135920" server: "https://192.168.72.207:8443"
	I0729 13:38:41.770191  300705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:41.779930  300705 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.207
	I0729 13:38:41.779961  300705 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:41.779976  300705 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:41.780030  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.816273  300705 cri.go:89] found id: ""
	I0729 13:38:41.816350  300705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:41.836512  300705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:41.847230  300705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:41.847249  300705 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:41.847297  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:41.856215  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:41.856262  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:41.866646  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:41.876656  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:41.876723  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:41.886810  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.895693  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:41.895755  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.904774  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:41.915232  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:41.915301  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:41.924961  300705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:41.937051  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:42.059359  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:39.329415  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.826891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.361613  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.861155  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.361524  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.862047  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.361778  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.862055  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.861737  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.361194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.862019  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.326814  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:45.666203  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:42.934386  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.142119  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.221754  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.346345  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:43.346451  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.847275  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.347551  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.391680  300705 api_server.go:72] duration metric: took 1.045336573s to wait for apiserver process to appear ...
	I0729 13:38:44.391709  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:44.391735  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:44.392354  300705 api_server.go:269] stopped: https://192.168.72.207:8443/healthz: Get "https://192.168.72.207:8443/healthz": dial tcp 192.168.72.207:8443: connect: connection refused
	I0729 13:38:44.892773  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.149059  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.149101  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.149128  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.161645  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.161672  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.391878  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.396499  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.396527  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:47.892015  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.897406  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.897436  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:48.391867  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:48.395941  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:38:48.401926  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:48.401951  300705 api_server.go:131] duration metric: took 4.010234721s to wait for apiserver health ...
	I0729 13:38:48.401962  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:48.401970  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:48.403912  300705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:44.073092  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:46.327011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:48.405332  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:48.416550  300705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:48.439881  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:48.452435  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:48.452477  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:48.452527  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:48.452544  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:48.452556  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:48.452575  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:48.452584  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:48.452594  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:48.452604  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:48.452617  300705 system_pods.go:74] duration metric: took 12.710662ms to wait for pod list to return data ...
	I0729 13:38:48.452629  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:48.455453  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:48.455484  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:48.455497  300705 node_conditions.go:105] duration metric: took 2.858433ms to run NodePressure ...
	I0729 13:38:48.455518  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:48.791507  300705 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796191  300705 kubeadm.go:739] kubelet initialised
	I0729 13:38:48.796213  300705 kubeadm.go:740] duration metric: took 4.674843ms waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796222  300705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:48.802395  300705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.807224  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807247  300705 pod_ready.go:81] duration metric: took 4.825485ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.807263  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807269  300705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.812485  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812516  300705 pod_ready.go:81] duration metric: took 5.235923ms for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.812529  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812536  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.817345  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817374  300705 pod_ready.go:81] duration metric: took 4.827847ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.817383  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817390  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.843709  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843754  300705 pod_ready.go:81] duration metric: took 26.35618ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.843775  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843783  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.243226  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243257  300705 pod_ready.go:81] duration metric: took 399.464753ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.243269  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243278  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.643370  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643399  300705 pod_ready.go:81] duration metric: took 400.112533ms for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.643410  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643416  300705 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:50.044089  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044119  300705 pod_ready.go:81] duration metric: took 400.694081ms for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:50.044128  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044135  300705 pod_ready.go:38] duration metric: took 1.247904039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:50.044153  300705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:50.055730  300705 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:50.055755  300705 kubeadm.go:597] duration metric: took 8.298625813s to restartPrimaryControlPlane
	I0729 13:38:50.055765  300705 kubeadm.go:394] duration metric: took 8.349303256s to StartCluster
	I0729 13:38:50.055785  300705 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.055869  300705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:50.057734  300705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.058013  300705 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:50.058092  300705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:50.058165  300705 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-135920"
	I0729 13:38:50.058216  300705 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-135920"
	W0729 13:38:50.058230  300705 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:50.058217  300705 addons.go:69] Setting default-storageclass=true in profile "embed-certs-135920"
	I0729 13:38:50.058244  300705 addons.go:69] Setting metrics-server=true in profile "embed-certs-135920"
	I0729 13:38:50.058268  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058270  300705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-135920"
	I0729 13:38:50.058297  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:50.058305  300705 addons.go:234] Setting addon metrics-server=true in "embed-certs-135920"
	W0729 13:38:50.058350  300705 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:50.058416  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058719  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058746  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058763  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058766  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058732  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058835  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.061029  300705 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:50.062610  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:50.074642  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0729 13:38:50.074661  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0729 13:38:50.075119  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0729 13:38:50.075217  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075310  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075570  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075833  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.075856  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076049  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076066  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076273  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076367  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076393  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076434  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076620  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.076863  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.076912  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.076959  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.077488  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.077519  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.080392  300705 addons.go:234] Setting addon default-storageclass=true in "embed-certs-135920"
	W0729 13:38:50.080419  300705 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:50.080458  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.080872  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.080914  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.093352  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0729 13:38:50.093981  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.094704  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.094742  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.095201  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.095452  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.095863  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0729 13:38:50.096287  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096506  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0729 13:38:50.096945  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096974  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.096991  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.097343  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.097408  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.097508  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.097529  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.099585  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.099600  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.099936  300705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:50.100730  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.100765  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.101377  300705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.101399  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:50.101424  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.101563  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.103218  300705 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:46.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.862046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.362045  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.361183  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.862026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.361204  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.861490  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.361635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.861519  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.104927  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:50.104948  300705 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:50.104971  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.105309  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106036  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.106207  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106369  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.106615  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.106716  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.106817  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.108316  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.108859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108908  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.109081  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.109240  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.109354  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.119251  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0729 13:38:50.119703  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.120206  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.120235  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.120620  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.120813  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.122685  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.122898  300705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.122910  300705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:50.122923  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.125412  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.125875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.125914  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.126140  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.126321  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.126448  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.126566  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.254664  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:50.276352  300705 node_ready.go:35] waiting up to 6m0s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:50.328315  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.412968  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.459653  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:50.459697  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:50.513203  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:50.513237  300705 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:50.576439  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.576469  300705 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:50.611994  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.701214  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701569  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.701636  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701647  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701657  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701663  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701909  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701936  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701939  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.707113  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.707130  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.707390  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.707407  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.707407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.625719  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212712139s)
	I0729 13:38:51.625766  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.625778  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626066  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.626109  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626117  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.626135  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.626143  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626412  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626430  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662030  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.049982518s)
	I0729 13:38:51.662094  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662110  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.662391  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.662759  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.662781  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662798  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.663076  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.663117  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.663126  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.663138  300705 addons.go:475] Verifying addon metrics-server=true in "embed-certs-135920"
	I0729 13:38:51.666005  300705 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 13:38:47.666568  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.167349  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.667365  300705 addons.go:510] duration metric: took 1.609276005s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 13:38:52.280219  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.826113  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.826826  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:53.327720  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.861510  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.362026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.861182  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.361850  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.861931  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.362035  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.861192  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.361173  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.862018  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.665875  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.666184  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.779805  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:56.780550  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:55.826349  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:58.326186  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:56.361740  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.862033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.362084  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.861406  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.861194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.361788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.861962  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.362043  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.862000  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.166551  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:59.167246  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.666773  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:57.780677  300705 node_ready.go:49] node "embed-certs-135920" has status "Ready":"True"
	I0729 13:38:57.780700  300705 node_ready.go:38] duration metric: took 7.504317897s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:57.780709  300705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:57.786299  300705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791107  300705 pod_ready.go:92] pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:57.791132  300705 pod_ready.go:81] duration metric: took 4.805712ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791143  300705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:59.806437  300705 pod_ready.go:102] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:00.296725  300705 pod_ready.go:92] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.296772  300705 pod_ready.go:81] duration metric: took 2.505622037s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.296782  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302450  300705 pod_ready.go:92] pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.302471  300705 pod_ready.go:81] duration metric: took 5.680644ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302482  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306734  300705 pod_ready.go:92] pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.306753  300705 pod_ready.go:81] duration metric: took 4.264085ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306762  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311745  300705 pod_ready.go:92] pod "kube-proxy-sn8bc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.311763  300705 pod_ready.go:81] duration metric: took 4.990061ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311773  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817465  300705 pod_ready.go:92] pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:01.817489  300705 pod_ready.go:81] duration metric: took 1.50570948s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817499  300705 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.825911  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.325485  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.362213  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.861107  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.361767  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.861151  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.361607  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.862013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.362032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.861858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.361611  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.862037  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.667047  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.166825  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.826817  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.326374  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:05.325891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:07.326167  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.362002  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.861635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.361659  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.862061  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.862083  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.361356  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.861763  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.361420  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.861822  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.666165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:10.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:08.824692  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.324207  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:09.326609  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.826082  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.362046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.861909  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.861834  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.361461  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.861666  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.861830  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.361141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.862003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.167800  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.665790  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:13.325286  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.826111  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:14.327217  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.826625  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.361731  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.862014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.361702  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.862141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.361808  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.361104  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.861123  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.361276  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.861176  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.666780  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.165629  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:18.328096  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.824426  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:19.326628  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.825705  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.362052  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.861150  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.361802  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.861996  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.362106  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.861135  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.361998  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.862048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.361848  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.861813  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.666434  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.666549  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:22.824988  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.825210  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.825579  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:23.826380  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:25.826544  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:27.826988  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:26.861651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:26.861733  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:26.904275  301425 cri.go:89] found id: ""
	I0729 13:39:26.904307  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.904315  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:26.904322  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:26.904387  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:26.946925  301425 cri.go:89] found id: ""
	I0729 13:39:26.946954  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.946966  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:26.946973  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:26.947036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:26.979236  301425 cri.go:89] found id: ""
	I0729 13:39:26.979267  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.979276  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:26.979282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:26.979330  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:27.022185  301425 cri.go:89] found id: ""
	I0729 13:39:27.022212  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.022220  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:27.022226  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:27.022277  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:27.055228  301425 cri.go:89] found id: ""
	I0729 13:39:27.055256  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.055266  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:27.055274  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:27.055335  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:27.088885  301425 cri.go:89] found id: ""
	I0729 13:39:27.088918  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.088926  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:27.088933  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:27.088986  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:27.123861  301425 cri.go:89] found id: ""
	I0729 13:39:27.123893  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.123902  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:27.123915  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:27.123967  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:27.157921  301425 cri.go:89] found id: ""
	I0729 13:39:27.157956  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.157964  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:27.157988  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:27.158003  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.222447  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:27.222489  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:27.265646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:27.265680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:27.317344  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:27.317388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:27.333664  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:27.333689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:27.460502  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:29.960703  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:29.974159  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:29.974235  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:30.009701  301425 cri.go:89] found id: ""
	I0729 13:39:30.009740  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.009753  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:30.009761  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:30.009822  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:30.045806  301425 cri.go:89] found id: ""
	I0729 13:39:30.045841  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.045853  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:30.045860  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:30.045924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:30.078709  301425 cri.go:89] found id: ""
	I0729 13:39:30.078738  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.078747  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:30.078753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:30.078808  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:30.112884  301425 cri.go:89] found id: ""
	I0729 13:39:30.112920  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.112932  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:30.112943  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:30.113012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:30.148160  301425 cri.go:89] found id: ""
	I0729 13:39:30.148196  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.148208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:30.148217  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:30.148285  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:30.186939  301425 cri.go:89] found id: ""
	I0729 13:39:30.186967  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.186975  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:30.186981  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:30.187039  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:30.241888  301425 cri.go:89] found id: ""
	I0729 13:39:30.241915  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.241926  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:30.241934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:30.242009  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:30.281482  301425 cri.go:89] found id: ""
	I0729 13:39:30.281510  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.281518  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:30.281527  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:30.281540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:30.321688  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:30.321730  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:30.378464  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:30.378508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:30.394109  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:30.394150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:30.474077  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:30.474101  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:30.474118  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.166322  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.166623  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.666142  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.323534  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.324750  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:30.327219  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:32.826011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.046016  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:33.059705  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:33.059795  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:33.096521  301425 cri.go:89] found id: ""
	I0729 13:39:33.096549  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.096557  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:33.096564  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:33.096621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:33.131262  301425 cri.go:89] found id: ""
	I0729 13:39:33.131295  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.131307  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:33.131314  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:33.131378  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:33.168889  301425 cri.go:89] found id: ""
	I0729 13:39:33.168915  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.168925  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:33.168932  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:33.168994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:33.205513  301425 cri.go:89] found id: ""
	I0729 13:39:33.205547  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.205558  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:33.205567  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:33.205644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:33.247051  301425 cri.go:89] found id: ""
	I0729 13:39:33.247079  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.247087  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:33.247093  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:33.247149  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:33.279541  301425 cri.go:89] found id: ""
	I0729 13:39:33.279575  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.279587  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:33.279596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:33.279659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:33.314000  301425 cri.go:89] found id: ""
	I0729 13:39:33.314034  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.314046  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:33.314054  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:33.314117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:33.351363  301425 cri.go:89] found id: ""
	I0729 13:39:33.351390  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.351401  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:33.351412  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:33.351437  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:33.413509  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:33.413547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:33.428128  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:33.428165  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:33.495430  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:33.495461  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:33.495478  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:33.574060  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:33.574098  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:34.166133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.167919  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.823668  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.824684  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.326216  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826516  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.113561  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:36.126899  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:36.126965  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:36.163363  301425 cri.go:89] found id: ""
	I0729 13:39:36.163396  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.163407  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:36.163414  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:36.163473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:36.205215  301425 cri.go:89] found id: ""
	I0729 13:39:36.205243  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.205259  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:36.205267  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:36.205331  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:36.243166  301425 cri.go:89] found id: ""
	I0729 13:39:36.243220  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.243231  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:36.243239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:36.243295  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:36.280804  301425 cri.go:89] found id: ""
	I0729 13:39:36.280836  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.280845  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:36.280852  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:36.280903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:36.317291  301425 cri.go:89] found id: ""
	I0729 13:39:36.317320  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.317330  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:36.317337  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:36.317399  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:36.358111  301425 cri.go:89] found id: ""
	I0729 13:39:36.358145  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.358156  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:36.358164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:36.358229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:36.399407  301425 cri.go:89] found id: ""
	I0729 13:39:36.399440  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.399451  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:36.399459  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:36.399525  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:36.437876  301425 cri.go:89] found id: ""
	I0729 13:39:36.437904  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.437914  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:36.437926  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:36.437942  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:36.514464  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:36.514493  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:36.514511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:36.592036  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:36.592083  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.647650  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:36.647691  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:36.706890  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:36.706935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.226070  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:39.239313  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:39.239373  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:39.274158  301425 cri.go:89] found id: ""
	I0729 13:39:39.274191  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.274202  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:39.274210  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:39.274286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:39.308448  301425 cri.go:89] found id: ""
	I0729 13:39:39.308484  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.308492  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:39.308499  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:39.308563  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:39.347745  301425 cri.go:89] found id: ""
	I0729 13:39:39.347782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.347791  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:39.347798  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:39.347856  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:39.380649  301425 cri.go:89] found id: ""
	I0729 13:39:39.380679  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.380688  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:39.380696  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:39.380767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:39.415076  301425 cri.go:89] found id: ""
	I0729 13:39:39.415107  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.415115  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:39.415120  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:39.415170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:39.450749  301425 cri.go:89] found id: ""
	I0729 13:39:39.450782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.450793  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:39.450801  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:39.450864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:39.482148  301425 cri.go:89] found id: ""
	I0729 13:39:39.482175  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.482184  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:39.482190  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:39.482239  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:39.518558  301425 cri.go:89] found id: ""
	I0729 13:39:39.518588  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.518597  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:39.518608  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:39.518622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:39.555753  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:39.555786  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:39.606627  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:39.606661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.620359  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:39.620388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:39.690685  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:39.690711  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:39.690728  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:38.665446  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.666445  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826801  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.325166  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:39.827390  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.326038  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.271925  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:42.284365  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:42.284447  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:42.318966  301425 cri.go:89] found id: ""
	I0729 13:39:42.318998  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.319020  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:42.319028  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:42.319111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:42.354811  301425 cri.go:89] found id: ""
	I0729 13:39:42.354840  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.354854  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:42.354862  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:42.354917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:42.402524  301425 cri.go:89] found id: ""
	I0729 13:39:42.402557  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.402569  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:42.402577  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:42.402643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:42.460954  301425 cri.go:89] found id: ""
	I0729 13:39:42.460984  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.461001  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:42.461010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:42.461063  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:42.516849  301425 cri.go:89] found id: ""
	I0729 13:39:42.516880  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.516890  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:42.516898  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:42.516963  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:42.560289  301425 cri.go:89] found id: ""
	I0729 13:39:42.560316  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.560325  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:42.560332  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:42.560397  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:42.597798  301425 cri.go:89] found id: ""
	I0729 13:39:42.597829  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.597839  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:42.597847  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:42.597912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:42.633015  301425 cri.go:89] found id: ""
	I0729 13:39:42.633043  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.633059  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:42.633068  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:42.633080  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:42.711103  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:42.711126  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:42.711141  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.787459  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:42.787499  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:42.828965  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:42.829002  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:42.881702  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:42.881740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:45.396462  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:45.410766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:45.410859  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:45.445886  301425 cri.go:89] found id: ""
	I0729 13:39:45.445931  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.445943  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:45.445960  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:45.446023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:45.484293  301425 cri.go:89] found id: ""
	I0729 13:39:45.484326  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.484338  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:45.484346  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:45.484410  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:45.520209  301425 cri.go:89] found id: ""
	I0729 13:39:45.520237  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.520246  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:45.520252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:45.520300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:45.555671  301425 cri.go:89] found id: ""
	I0729 13:39:45.555702  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.555711  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:45.555717  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:45.555767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:45.594578  301425 cri.go:89] found id: ""
	I0729 13:39:45.594609  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.594618  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:45.594624  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:45.594685  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:45.631777  301425 cri.go:89] found id: ""
	I0729 13:39:45.631805  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.631817  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:45.631825  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:45.631881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:45.667163  301425 cri.go:89] found id: ""
	I0729 13:39:45.667189  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.667197  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:45.667203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:45.667258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:45.703393  301425 cri.go:89] found id: ""
	I0729 13:39:45.703434  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.703443  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:45.703454  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:45.703488  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:45.774424  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:45.774452  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:45.774472  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:45.857529  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:45.857586  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:45.899737  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:45.899775  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:45.952640  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:45.952685  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:42.666728  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.165982  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.825543  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.323544  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:47.323595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:44.825237  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:46.825276  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.467705  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:48.482292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:48.482380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:48.520146  301425 cri.go:89] found id: ""
	I0729 13:39:48.520181  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.520195  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:48.520204  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:48.520282  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:48.552623  301425 cri.go:89] found id: ""
	I0729 13:39:48.552654  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.552665  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:48.552672  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:48.552734  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:48.587254  301425 cri.go:89] found id: ""
	I0729 13:39:48.587290  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.587303  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:48.587309  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:48.587368  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:48.621045  301425 cri.go:89] found id: ""
	I0729 13:39:48.621076  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.621088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:48.621096  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:48.621160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:48.654117  301425 cri.go:89] found id: ""
	I0729 13:39:48.654151  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.654163  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:48.654171  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:48.654236  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:48.693108  301425 cri.go:89] found id: ""
	I0729 13:39:48.693149  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.693166  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:48.693173  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:48.693225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:48.733000  301425 cri.go:89] found id: ""
	I0729 13:39:48.733025  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.733033  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:48.733039  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:48.733088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:48.773761  301425 cri.go:89] found id: ""
	I0729 13:39:48.773789  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.773798  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:48.773807  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:48.773822  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:48.826655  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:48.826683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.840335  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:48.840364  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:48.913727  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:48.913754  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:48.913774  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:48.990196  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:48.990235  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
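	(Process 301425 repeats this same diagnostic cycle for the rest of the section: probe for each control-plane container with crictl, find none, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output, with the describe-nodes step failing because nothing is listening on localhost:8443. A condensed sketch of one cycle, assembled only from the commands the log itself records, is below; running it on the node, e.g. via minikube ssh, is an assumption.)

	# Probe each expected component; every query returns no container IDs in this run.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# Log gathering that follows each probe round:
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # fails: connection to localhost:8443 refused
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a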
	I0729 13:39:47.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.167105  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.667165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.324027  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.324146  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.825859  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.326299  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.533333  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:51.547115  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:51.547175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:51.583247  301425 cri.go:89] found id: ""
	I0729 13:39:51.583284  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.583292  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:51.583297  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:51.583350  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:51.618925  301425 cri.go:89] found id: ""
	I0729 13:39:51.618958  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.618969  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:51.618977  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:51.619036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:51.657099  301425 cri.go:89] found id: ""
	I0729 13:39:51.657132  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.657144  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:51.657151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:51.657210  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:51.695413  301425 cri.go:89] found id: ""
	I0729 13:39:51.695459  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.695471  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:51.695480  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:51.695553  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:51.731153  301425 cri.go:89] found id: ""
	I0729 13:39:51.731186  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.731198  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:51.731206  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:51.731271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:51.765662  301425 cri.go:89] found id: ""
	I0729 13:39:51.765716  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.765730  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:51.765740  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:51.765807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:51.800442  301425 cri.go:89] found id: ""
	I0729 13:39:51.800480  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.800491  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:51.800500  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:51.800562  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:51.844516  301425 cri.go:89] found id: ""
	I0729 13:39:51.844542  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.844551  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:51.844562  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:51.844580  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:51.896139  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:51.896176  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:51.910479  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:51.910511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:51.980025  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:51.980052  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:51.980071  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:52.054674  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:52.054717  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.596468  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:54.612233  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:54.612344  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:54.653506  301425 cri.go:89] found id: ""
	I0729 13:39:54.653547  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.653558  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:54.653565  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:54.653624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:54.696964  301425 cri.go:89] found id: ""
	I0729 13:39:54.697002  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.697015  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:54.697023  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:54.697088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:54.731165  301425 cri.go:89] found id: ""
	I0729 13:39:54.731196  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.731207  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:54.731214  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:54.731279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:54.774397  301425 cri.go:89] found id: ""
	I0729 13:39:54.774426  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.774437  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:54.774444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:54.774506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:54.813365  301425 cri.go:89] found id: ""
	I0729 13:39:54.813396  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.813408  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:54.813414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:54.813480  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:54.849936  301425 cri.go:89] found id: ""
	I0729 13:39:54.849962  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.849970  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:54.849980  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:54.850042  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:54.883979  301425 cri.go:89] found id: ""
	I0729 13:39:54.884007  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.884015  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:54.884021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:54.884087  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:54.919754  301425 cri.go:89] found id: ""
	I0729 13:39:54.919779  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.919787  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:54.919796  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:54.919817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:54.973082  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:54.973117  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:54.986534  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:54.986571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:55.055473  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:55.055499  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:55.055514  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:55.138278  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:55.138322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.166585  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:56.166714  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.824525  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.824559  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.825238  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.826464  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.826664  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.683818  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:57.698992  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:57.699070  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:57.742071  301425 cri.go:89] found id: ""
	I0729 13:39:57.742103  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.742113  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:57.742121  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:57.742185  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:57.777871  301425 cri.go:89] found id: ""
	I0729 13:39:57.777902  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.777911  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:57.777918  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:57.777975  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:57.817767  301425 cri.go:89] found id: ""
	I0729 13:39:57.817798  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.817809  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:57.817817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:57.817889  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:57.855608  301425 cri.go:89] found id: ""
	I0729 13:39:57.855634  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.855644  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:57.855651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:57.855714  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:57.891219  301425 cri.go:89] found id: ""
	I0729 13:39:57.891248  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.891258  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:57.891266  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:57.891336  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:57.926000  301425 cri.go:89] found id: ""
	I0729 13:39:57.926034  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.926045  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:57.926053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:57.926116  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:57.964935  301425 cri.go:89] found id: ""
	I0729 13:39:57.964962  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.964978  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:57.964985  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:57.965051  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:58.001363  301425 cri.go:89] found id: ""
	I0729 13:39:58.001393  301425 logs.go:276] 0 containers: []
	W0729 13:39:58.001405  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:58.001417  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:58.001434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:58.057551  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:58.057598  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:58.072162  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:58.072200  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:58.140533  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:58.140565  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:58.140582  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:58.227285  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:58.227330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:00.769075  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:00.783394  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:00.783471  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:00.831260  301425 cri.go:89] found id: ""
	I0729 13:40:00.831291  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.831301  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:00.831309  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:00.831370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:00.870017  301425 cri.go:89] found id: ""
	I0729 13:40:00.870045  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.870057  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:00.870065  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:00.870127  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:00.904691  301425 cri.go:89] found id: ""
	I0729 13:40:00.904728  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.904740  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:00.904748  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:00.904828  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:00.937221  301425 cri.go:89] found id: ""
	I0729 13:40:00.937249  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.937259  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:00.937265  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:00.937329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:58.167355  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.666837  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.824755  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.324616  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.325368  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.325689  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.326062  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.977961  301425 cri.go:89] found id: ""
	I0729 13:40:00.977991  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.978002  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:00.978010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:00.978104  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:01.014239  301425 cri.go:89] found id: ""
	I0729 13:40:01.014271  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.014283  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:01.014292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:01.014362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:01.050583  301425 cri.go:89] found id: ""
	I0729 13:40:01.050615  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.050630  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:01.050637  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:01.050696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:01.091599  301425 cri.go:89] found id: ""
	I0729 13:40:01.091624  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.091634  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:01.091643  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:01.091661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:01.146404  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:01.146445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:01.160327  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:01.160358  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:01.237120  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:01.237147  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:01.237162  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:01.321539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:01.321590  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:03.865268  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:03.879648  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:03.879724  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:03.915303  301425 cri.go:89] found id: ""
	I0729 13:40:03.915329  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.915338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:03.915344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:03.915403  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:03.951982  301425 cri.go:89] found id: ""
	I0729 13:40:03.952014  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.952023  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:03.952032  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:03.952099  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:03.989751  301425 cri.go:89] found id: ""
	I0729 13:40:03.989785  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.989796  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:03.989804  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:03.989870  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:04.026934  301425 cri.go:89] found id: ""
	I0729 13:40:04.026975  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.026988  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:04.026996  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:04.027059  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:04.064135  301425 cri.go:89] found id: ""
	I0729 13:40:04.064165  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.064175  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:04.064187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:04.064256  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:04.103080  301425 cri.go:89] found id: ""
	I0729 13:40:04.103108  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.103117  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:04.103123  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:04.103172  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:04.143370  301425 cri.go:89] found id: ""
	I0729 13:40:04.143403  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.143414  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:04.143422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:04.143491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:04.179251  301425 cri.go:89] found id: ""
	I0729 13:40:04.179286  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.179298  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:04.179311  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:04.179330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:04.261058  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:04.261089  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:04.261111  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:04.342897  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:04.342935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:04.391504  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:04.391532  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:04.443064  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:04.443106  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:03.166195  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:05.166660  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.824882  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:07.324346  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.326236  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.825685  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.959346  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:06.974377  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:06.974444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:07.007797  301425 cri.go:89] found id: ""
	I0729 13:40:07.007834  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.007847  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:07.007856  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:07.007924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:07.042707  301425 cri.go:89] found id: ""
	I0729 13:40:07.042741  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.042749  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:07.042755  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:07.042807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:07.080150  301425 cri.go:89] found id: ""
	I0729 13:40:07.080185  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.080196  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:07.080203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:07.080268  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:07.115740  301425 cri.go:89] found id: ""
	I0729 13:40:07.115777  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.115788  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:07.115796  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:07.115888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:07.154110  301425 cri.go:89] found id: ""
	I0729 13:40:07.154141  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.154151  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:07.154158  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:07.154225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:07.190819  301425 cri.go:89] found id: ""
	I0729 13:40:07.190850  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.190858  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:07.190865  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:07.190917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:07.231530  301425 cri.go:89] found id: ""
	I0729 13:40:07.231560  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.231571  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:07.231579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:07.231643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:07.272211  301425 cri.go:89] found id: ""
	I0729 13:40:07.272240  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.272247  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:07.272257  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:07.272269  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.326673  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:07.326704  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:07.341255  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:07.341282  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:07.409850  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:07.409878  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:07.409895  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:07.493105  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:07.493169  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.033906  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:10.047938  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:10.048018  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:10.084224  301425 cri.go:89] found id: ""
	I0729 13:40:10.084251  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.084259  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:10.084265  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:10.084316  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:10.120362  301425 cri.go:89] found id: ""
	I0729 13:40:10.120398  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.120409  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:10.120417  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:10.120484  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:10.154128  301425 cri.go:89] found id: ""
	I0729 13:40:10.154160  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.154170  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:10.154178  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:10.154243  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:10.189539  301425 cri.go:89] found id: ""
	I0729 13:40:10.189574  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.189588  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:10.189596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:10.189661  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:10.228821  301425 cri.go:89] found id: ""
	I0729 13:40:10.228855  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.228867  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:10.228875  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:10.228950  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:10.274726  301425 cri.go:89] found id: ""
	I0729 13:40:10.274758  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.274769  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:10.274776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:10.274845  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:10.308910  301425 cri.go:89] found id: ""
	I0729 13:40:10.308945  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.308956  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:10.308964  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:10.309030  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:10.346008  301425 cri.go:89] found id: ""
	I0729 13:40:10.346044  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.346056  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:10.346069  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:10.346091  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:10.360541  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:10.360581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:10.433763  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:10.433788  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:10.433802  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:10.520366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:10.520418  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.561482  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:10.561512  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.668816  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:10.166833  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:09.823429  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.824033  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:08.826798  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.326762  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.327128  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.114858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:13.128348  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:13.128425  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:13.165329  301425 cri.go:89] found id: ""
	I0729 13:40:13.165359  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.165370  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:13.165377  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:13.165441  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:13.200104  301425 cri.go:89] found id: ""
	I0729 13:40:13.200135  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.200148  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:13.200155  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:13.200224  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:13.238632  301425 cri.go:89] found id: ""
	I0729 13:40:13.238680  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.238688  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:13.238694  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:13.238748  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:13.270859  301425 cri.go:89] found id: ""
	I0729 13:40:13.270892  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.270901  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:13.270907  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:13.270976  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:13.308346  301425 cri.go:89] found id: ""
	I0729 13:40:13.308378  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.308386  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:13.308392  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:13.308444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:13.346286  301425 cri.go:89] found id: ""
	I0729 13:40:13.346319  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.346331  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:13.346339  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:13.346412  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:13.383699  301425 cri.go:89] found id: ""
	I0729 13:40:13.383736  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.383769  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:13.383791  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:13.383850  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:13.419958  301425 cri.go:89] found id: ""
	I0729 13:40:13.420045  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.420058  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:13.420071  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:13.420094  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.473984  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:13.474028  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:13.488376  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:13.488410  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:13.559515  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:13.559543  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:13.559560  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:13.640528  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:13.640570  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:12.665799  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.666662  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.668217  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.323746  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.323961  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:15.826422  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.326284  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.189581  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:16.203962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:16.204052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:16.240537  301425 cri.go:89] found id: ""
	I0729 13:40:16.240572  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.240583  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:16.240591  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:16.240659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:16.277060  301425 cri.go:89] found id: ""
	I0729 13:40:16.277099  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.277112  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:16.277123  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:16.277200  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:16.313839  301425 cri.go:89] found id: ""
	I0729 13:40:16.313869  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.313878  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:16.313884  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:16.313935  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:16.351806  301425 cri.go:89] found id: ""
	I0729 13:40:16.351840  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.351850  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:16.351858  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:16.351922  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:16.387122  301425 cri.go:89] found id: ""
	I0729 13:40:16.387158  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.387169  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:16.387176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:16.387242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:16.424180  301425 cri.go:89] found id: ""
	I0729 13:40:16.424209  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.424220  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:16.424229  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:16.424292  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:16.461827  301425 cri.go:89] found id: ""
	I0729 13:40:16.461865  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.461879  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:16.461889  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:16.461946  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:16.510198  301425 cri.go:89] found id: ""
	I0729 13:40:16.510230  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.510238  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:16.510248  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:16.510264  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:16.585378  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:16.585420  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.629304  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:16.629337  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:16.682386  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:16.682434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:16.698405  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:16.698436  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:16.770281  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.270551  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:19.284543  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:19.284617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:19.325194  301425 cri.go:89] found id: ""
	I0729 13:40:19.325221  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.325231  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:19.325238  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:19.325298  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:19.362007  301425 cri.go:89] found id: ""
	I0729 13:40:19.362038  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.362058  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:19.362066  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:19.362196  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:19.401162  301425 cri.go:89] found id: ""
	I0729 13:40:19.401191  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.401202  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:19.401210  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:19.401274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:19.434652  301425 cri.go:89] found id: ""
	I0729 13:40:19.434689  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.434700  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:19.434709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:19.434774  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:19.470116  301425 cri.go:89] found id: ""
	I0729 13:40:19.470149  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.470157  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:19.470164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:19.470218  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:19.503593  301425 cri.go:89] found id: ""
	I0729 13:40:19.503621  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.503629  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:19.503635  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:19.503696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:19.546127  301425 cri.go:89] found id: ""
	I0729 13:40:19.546155  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.546164  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:19.546169  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:19.546217  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:19.584600  301425 cri.go:89] found id: ""
	I0729 13:40:19.584639  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.584650  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:19.584663  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:19.584681  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:19.599411  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:19.599446  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:19.665811  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.665836  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:19.665853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:19.747295  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:19.747339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:19.790476  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:19.790516  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:18.669004  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.166437  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.824788  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.327093  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:20.825470  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.827651  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.346725  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:22.361349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:22.361443  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:22.394840  301425 cri.go:89] found id: ""
	I0729 13:40:22.394870  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.394881  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:22.394889  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:22.394956  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:22.429328  301425 cri.go:89] found id: ""
	I0729 13:40:22.429356  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.429364  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:22.429370  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:22.429431  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:22.463179  301425 cri.go:89] found id: ""
	I0729 13:40:22.463206  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.463214  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:22.463220  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:22.463291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:22.497527  301425 cri.go:89] found id: ""
	I0729 13:40:22.497557  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.497565  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:22.497571  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:22.497627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:22.537607  301425 cri.go:89] found id: ""
	I0729 13:40:22.537635  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.537646  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:22.537654  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:22.537718  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:22.580658  301425 cri.go:89] found id: ""
	I0729 13:40:22.580689  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.580701  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:22.580709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:22.580775  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:22.622229  301425 cri.go:89] found id: ""
	I0729 13:40:22.622261  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.622270  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:22.622282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:22.622346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:22.660091  301425 cri.go:89] found id: ""
	I0729 13:40:22.660120  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.660129  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:22.660139  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:22.660153  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.715053  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:22.715090  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:22.728865  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:22.728898  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:22.805760  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:22.805785  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:22.805799  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:22.890915  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:22.890960  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:25.457272  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:25.471002  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:25.471088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:25.506190  301425 cri.go:89] found id: ""
	I0729 13:40:25.506226  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.506237  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:25.506244  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:25.506297  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:25.540957  301425 cri.go:89] found id: ""
	I0729 13:40:25.540991  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.541002  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:25.541011  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:25.541074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:25.578378  301425 cri.go:89] found id: ""
	I0729 13:40:25.578424  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.578440  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:25.578448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:25.578518  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:25.620930  301425 cri.go:89] found id: ""
	I0729 13:40:25.620962  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.620979  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:25.620987  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:25.621056  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:25.655558  301425 cri.go:89] found id: ""
	I0729 13:40:25.655589  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.655597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:25.655604  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:25.655670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:25.688810  301425 cri.go:89] found id: ""
	I0729 13:40:25.688845  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.688855  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:25.688863  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:25.688930  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:25.724384  301425 cri.go:89] found id: ""
	I0729 13:40:25.724416  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.724428  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:25.724435  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:25.724514  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:25.763174  301425 cri.go:89] found id: ""
	I0729 13:40:25.763200  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.763209  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:25.763219  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:25.763232  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:25.818517  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:25.818569  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:25.833939  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:25.833973  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:25.910487  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:25.910515  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:25.910537  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:23.167028  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.666513  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:23.824183  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.827054  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.325894  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:27.824855  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.993887  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:25.993929  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:28.536843  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:28.550097  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:28.550175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:28.592664  301425 cri.go:89] found id: ""
	I0729 13:40:28.592697  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.592709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:28.592716  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:28.592788  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:28.638299  301425 cri.go:89] found id: ""
	I0729 13:40:28.638329  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.638337  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:28.638343  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:28.638395  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:28.682410  301425 cri.go:89] found id: ""
	I0729 13:40:28.682437  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.682446  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:28.682452  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:28.682511  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:28.719402  301425 cri.go:89] found id: ""
	I0729 13:40:28.719430  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.719438  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:28.719444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:28.719504  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:28.767515  301425 cri.go:89] found id: ""
	I0729 13:40:28.767547  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.767559  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:28.767568  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:28.767633  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:28.811600  301425 cri.go:89] found id: ""
	I0729 13:40:28.811632  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.811644  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:28.811652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:28.811727  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:28.853364  301425 cri.go:89] found id: ""
	I0729 13:40:28.853397  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.853407  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:28.853414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:28.853486  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:28.890981  301425 cri.go:89] found id: ""
	I0729 13:40:28.891013  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.891024  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:28.891035  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:28.891050  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:28.944174  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:28.944213  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:28.957724  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:28.957755  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:29.026457  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:29.026479  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:29.026497  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:29.105366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:29.105415  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:27.667251  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.166789  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:28.323476  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.324242  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:32.325477  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:29.825621  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.828363  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.649374  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:31.663432  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:31.663512  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:31.702047  301425 cri.go:89] found id: ""
	I0729 13:40:31.702080  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.702088  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:31.702098  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:31.702162  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:31.738484  301425 cri.go:89] found id: ""
	I0729 13:40:31.738510  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.738518  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:31.738524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:31.738583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:31.774214  301425 cri.go:89] found id: ""
	I0729 13:40:31.774249  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.774261  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:31.774270  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:31.774339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:31.810263  301425 cri.go:89] found id: ""
	I0729 13:40:31.810293  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.810302  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:31.810307  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:31.810369  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:31.848124  301425 cri.go:89] found id: ""
	I0729 13:40:31.848153  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.848160  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:31.848167  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:31.848234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:31.885531  301425 cri.go:89] found id: ""
	I0729 13:40:31.885561  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.885571  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:31.885580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:31.885650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:31.923904  301425 cri.go:89] found id: ""
	I0729 13:40:31.923939  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.923952  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:31.923959  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:31.924029  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:31.957165  301425 cri.go:89] found id: ""
	I0729 13:40:31.957202  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.957213  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:31.957228  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:31.957248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:32.039221  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:32.039262  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.078191  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:32.078229  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:32.131871  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:32.131922  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:32.146676  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:32.146706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:32.223849  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:34.724927  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:34.739029  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:34.739113  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:34.774627  301425 cri.go:89] found id: ""
	I0729 13:40:34.774660  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.774669  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:34.774675  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:34.774743  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:34.809840  301425 cri.go:89] found id: ""
	I0729 13:40:34.809872  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.809882  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:34.809887  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:34.809940  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:34.847530  301425 cri.go:89] found id: ""
	I0729 13:40:34.847561  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.847572  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:34.847580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:34.847648  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:34.881828  301425 cri.go:89] found id: ""
	I0729 13:40:34.881856  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.881870  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:34.881876  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:34.881937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:34.918903  301425 cri.go:89] found id: ""
	I0729 13:40:34.918937  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.918949  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:34.918956  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:34.919015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:34.954714  301425 cri.go:89] found id: ""
	I0729 13:40:34.954749  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.954761  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:34.954770  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:34.954825  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:34.993433  301425 cri.go:89] found id: ""
	I0729 13:40:34.993463  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.993472  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:34.993478  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:34.993531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:35.033830  301425 cri.go:89] found id: ""
	I0729 13:40:35.033859  301425 logs.go:276] 0 containers: []
	W0729 13:40:35.033874  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:35.033884  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:35.033900  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:35.084546  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:35.084595  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:35.098807  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:35.098845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:35.182636  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:35.182662  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:35.182674  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:35.262767  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:35.262808  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.665817  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.670805  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.823905  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.824232  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.326644  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.825977  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:37.802033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:37.815633  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:37.815697  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:37.857522  301425 cri.go:89] found id: ""
	I0729 13:40:37.857552  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.857563  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:37.857571  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:37.857627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:37.897527  301425 cri.go:89] found id: ""
	I0729 13:40:37.897564  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.897575  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:37.897583  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:37.897649  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.937135  301425 cri.go:89] found id: ""
	I0729 13:40:37.937167  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.937176  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:37.937189  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:37.937255  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:37.972699  301425 cri.go:89] found id: ""
	I0729 13:40:37.972734  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.972751  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:37.972761  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:37.972933  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:38.012702  301425 cri.go:89] found id: ""
	I0729 13:40:38.012732  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.012740  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:38.012747  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:38.012832  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:38.050228  301425 cri.go:89] found id: ""
	I0729 13:40:38.050260  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.050268  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:38.050275  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:38.050329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:38.084665  301425 cri.go:89] found id: ""
	I0729 13:40:38.084693  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.084707  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:38.084715  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:38.084780  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:38.119155  301425 cri.go:89] found id: ""
	I0729 13:40:38.119200  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.119211  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:38.119222  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:38.119236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:38.170934  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:38.170968  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:38.185298  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:38.185329  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:38.256118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:38.256149  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:38.256166  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:38.337090  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:38.337127  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:40.876177  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:40.889580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:40.889655  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:40.922971  301425 cri.go:89] found id: ""
	I0729 13:40:40.923002  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.923010  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:40.923016  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:40.923074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:40.955840  301425 cri.go:89] found id: ""
	I0729 13:40:40.955872  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.955884  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:40.955891  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:40.955952  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.165718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.166160  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.168344  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:38.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.324607  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.324996  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.344232  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:40.993258  301425 cri.go:89] found id: ""
	I0729 13:40:40.993290  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.993298  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:40.993305  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:40.993357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:41.026370  301425 cri.go:89] found id: ""
	I0729 13:40:41.026398  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.026409  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:41.026416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:41.026473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:41.060538  301425 cri.go:89] found id: ""
	I0729 13:40:41.060565  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.060574  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:41.060579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:41.060630  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:41.105074  301425 cri.go:89] found id: ""
	I0729 13:40:41.105108  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.105118  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:41.105126  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:41.105193  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:41.138254  301425 cri.go:89] found id: ""
	I0729 13:40:41.138280  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.138288  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:41.138294  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:41.138342  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:41.171432  301425 cri.go:89] found id: ""
	I0729 13:40:41.171458  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.171466  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:41.171475  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:41.171487  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:41.184703  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:41.184736  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:41.265356  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:41.265392  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:41.265409  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:41.345939  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:41.345979  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:41.388819  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:41.388852  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:43.940388  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:43.955448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:43.955515  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:43.998457  301425 cri.go:89] found id: ""
	I0729 13:40:43.998494  301425 logs.go:276] 0 containers: []
	W0729 13:40:43.998506  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:43.998515  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:43.998584  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:44.038142  301425 cri.go:89] found id: ""
	I0729 13:40:44.038173  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.038185  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:44.038193  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:44.038260  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:44.077270  301425 cri.go:89] found id: ""
	I0729 13:40:44.077302  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.077313  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:44.077321  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:44.077391  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:44.117612  301425 cri.go:89] found id: ""
	I0729 13:40:44.117641  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.117661  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:44.117681  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:44.117749  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:44.152564  301425 cri.go:89] found id: ""
	I0729 13:40:44.152603  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.152615  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:44.152623  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:44.152683  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:44.188245  301425 cri.go:89] found id: ""
	I0729 13:40:44.188276  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.188288  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:44.188296  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:44.188355  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:44.224947  301425 cri.go:89] found id: ""
	I0729 13:40:44.224975  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.224983  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:44.224989  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:44.225037  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:44.264830  301425 cri.go:89] found id: ""
	I0729 13:40:44.264860  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.264867  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:44.264877  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:44.264893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:44.343145  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:44.343182  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:44.384619  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:44.384650  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:44.438195  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:44.438237  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:44.452115  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:44.452152  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:44.526586  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:43.666987  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.167143  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.825141  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.324972  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.827065  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.325488  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:47.027726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:47.041174  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:47.041242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:47.079265  301425 cri.go:89] found id: ""
	I0729 13:40:47.079295  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.079304  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:47.079313  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:47.079380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:47.119775  301425 cri.go:89] found id: ""
	I0729 13:40:47.119807  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.119820  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:47.119828  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:47.119904  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:47.155381  301425 cri.go:89] found id: ""
	I0729 13:40:47.155415  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.155426  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:47.155434  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:47.155490  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:47.195071  301425 cri.go:89] found id: ""
	I0729 13:40:47.195103  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.195111  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:47.195117  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:47.195167  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:47.229487  301425 cri.go:89] found id: ""
	I0729 13:40:47.229519  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.229531  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:47.229539  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:47.229611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:47.266159  301425 cri.go:89] found id: ""
	I0729 13:40:47.266190  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.266201  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:47.266209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:47.266269  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:47.300813  301425 cri.go:89] found id: ""
	I0729 13:40:47.300845  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.300854  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:47.300860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:47.300916  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:47.340378  301425 cri.go:89] found id: ""
	I0729 13:40:47.340412  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.340432  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:47.340444  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:47.340464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:47.395403  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:47.395444  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:47.409505  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:47.409539  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:47.481327  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.481349  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:47.481365  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:47.560129  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:47.560172  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.105832  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:50.121192  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:50.121264  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:50.160217  301425 cri.go:89] found id: ""
	I0729 13:40:50.160247  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.160256  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:50.160262  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:50.160313  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:50.199952  301425 cri.go:89] found id: ""
	I0729 13:40:50.199986  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.199998  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:50.200005  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:50.200065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:50.240036  301425 cri.go:89] found id: ""
	I0729 13:40:50.240069  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.240076  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:50.240083  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:50.240134  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:50.279761  301425 cri.go:89] found id: ""
	I0729 13:40:50.279788  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.279796  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:50.279802  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:50.279852  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:50.320324  301425 cri.go:89] found id: ""
	I0729 13:40:50.320350  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.320358  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:50.320364  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:50.320423  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:50.356385  301425 cri.go:89] found id: ""
	I0729 13:40:50.356413  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.356421  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:50.356427  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:50.356482  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:50.396866  301425 cri.go:89] found id: ""
	I0729 13:40:50.396900  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.396912  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:50.396919  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:50.397008  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:50.434778  301425 cri.go:89] found id: ""
	I0729 13:40:50.434812  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.434823  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:50.434836  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:50.434853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:50.447746  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:50.447776  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:50.523750  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:50.523772  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:50.523787  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:50.604206  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:50.604255  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.647414  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:50.647449  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:48.666463  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.666670  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.823595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.824045  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.826836  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:51.326943  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.327715  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.201653  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:53.215745  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:53.215814  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:53.250482  301425 cri.go:89] found id: ""
	I0729 13:40:53.250508  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.250516  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:53.250522  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:53.250583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:53.285956  301425 cri.go:89] found id: ""
	I0729 13:40:53.285988  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.285996  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:53.286002  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:53.286055  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:53.320248  301425 cri.go:89] found id: ""
	I0729 13:40:53.320281  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.320292  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:53.320300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:53.320364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:53.355155  301425 cri.go:89] found id: ""
	I0729 13:40:53.355188  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.355200  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:53.355209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:53.355271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:53.389519  301425 cri.go:89] found id: ""
	I0729 13:40:53.389549  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.389557  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:53.389564  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:53.389620  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:53.424391  301425 cri.go:89] found id: ""
	I0729 13:40:53.424419  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.424427  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:53.424433  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:53.424492  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:53.463297  301425 cri.go:89] found id: ""
	I0729 13:40:53.463331  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.463342  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:53.463350  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:53.463433  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:53.497565  301425 cri.go:89] found id: ""
	I0729 13:40:53.497593  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.497601  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:53.497610  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:53.497622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.548906  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:53.548948  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:53.562789  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:53.562823  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:53.635656  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:53.635679  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:53.635693  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:53.715973  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:53.716024  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:53.166007  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.166420  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.324486  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.824480  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.825127  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.326505  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:56.258726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:56.273826  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:56.273905  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:56.310881  301425 cri.go:89] found id: ""
	I0729 13:40:56.310927  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.310936  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:56.310944  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:56.310999  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:56.350104  301425 cri.go:89] found id: ""
	I0729 13:40:56.350139  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.350151  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:56.350158  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:56.350221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:56.385100  301425 cri.go:89] found id: ""
	I0729 13:40:56.385136  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.385145  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:56.385151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:56.385234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:56.421904  301425 cri.go:89] found id: ""
	I0729 13:40:56.421941  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.421953  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:56.421961  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:56.422025  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:56.457366  301425 cri.go:89] found id: ""
	I0729 13:40:56.457403  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.457414  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:56.457422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:56.457491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:56.496700  301425 cri.go:89] found id: ""
	I0729 13:40:56.496732  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.496746  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:56.496755  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:56.496844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:56.532011  301425 cri.go:89] found id: ""
	I0729 13:40:56.532039  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.532047  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:56.532053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:56.532102  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:56.567511  301425 cri.go:89] found id: ""
	I0729 13:40:56.567543  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.567554  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:56.567566  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:56.567581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:56.615875  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:56.615914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:56.629818  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:56.629862  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:56.703255  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:56.703284  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:56.703298  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:56.786466  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:56.786508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:59.328670  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:59.342993  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:59.343061  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:59.378267  301425 cri.go:89] found id: ""
	I0729 13:40:59.378301  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.378313  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:59.378321  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:59.378392  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:59.415637  301425 cri.go:89] found id: ""
	I0729 13:40:59.415669  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.415680  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:59.415687  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:59.415759  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:59.451170  301425 cri.go:89] found id: ""
	I0729 13:40:59.451204  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.451212  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:59.451219  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:59.451275  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:59.485914  301425 cri.go:89] found id: ""
	I0729 13:40:59.485948  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.485960  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:59.485975  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:59.486052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:59.523168  301425 cri.go:89] found id: ""
	I0729 13:40:59.523198  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.523208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:59.523216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:59.523274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:59.557711  301425 cri.go:89] found id: ""
	I0729 13:40:59.557746  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.557758  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:59.557766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:59.557826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:59.593387  301425 cri.go:89] found id: ""
	I0729 13:40:59.593421  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.593434  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:59.593442  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:59.593506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:59.627521  301425 cri.go:89] found id: ""
	I0729 13:40:59.627555  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.627566  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:59.627578  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:59.627597  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:59.677497  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:59.677538  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:59.692116  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:59.692150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:59.759344  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:59.759369  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:59.759382  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:59.840380  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:59.840423  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:57.166964  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:59.666395  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:01.667229  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.323708  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.323995  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.325049  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.328293  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.826414  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.380718  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:02.394436  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:02.394497  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:02.433283  301425 cri.go:89] found id: ""
	I0729 13:41:02.433313  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.433323  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:02.433332  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:02.433393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:02.467206  301425 cri.go:89] found id: ""
	I0729 13:41:02.467232  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.467241  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:02.467247  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:02.467300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:02.502743  301425 cri.go:89] found id: ""
	I0729 13:41:02.502774  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.502783  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:02.502790  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:02.502844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:02.536415  301425 cri.go:89] found id: ""
	I0729 13:41:02.536449  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.536462  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:02.536470  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:02.536527  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:02.570572  301425 cri.go:89] found id: ""
	I0729 13:41:02.570610  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.570621  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:02.570629  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:02.570702  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:02.606251  301425 cri.go:89] found id: ""
	I0729 13:41:02.606277  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.606285  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:02.606292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:02.606345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:02.644637  301425 cri.go:89] found id: ""
	I0729 13:41:02.644664  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.644675  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:02.644683  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:02.644750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:02.679493  301425 cri.go:89] found id: ""
	I0729 13:41:02.679519  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.679527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:02.679537  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:02.679553  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.734865  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:02.734896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:02.787929  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:02.787962  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:02.801317  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:02.801344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:02.867838  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:02.867862  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:02.867877  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.451323  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:05.465262  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:05.465338  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:05.499797  301425 cri.go:89] found id: ""
	I0729 13:41:05.499827  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.499837  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:05.499845  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:05.499912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:05.534363  301425 cri.go:89] found id: ""
	I0729 13:41:05.534403  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.534416  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:05.534424  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:05.534483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:05.571366  301425 cri.go:89] found id: ""
	I0729 13:41:05.571397  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.571408  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:05.571416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:05.571481  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:05.611301  301425 cri.go:89] found id: ""
	I0729 13:41:05.611335  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.611346  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:05.611355  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:05.611422  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:05.650698  301425 cri.go:89] found id: ""
	I0729 13:41:05.650738  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.650750  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:05.650758  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:05.650823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:05.686166  301425 cri.go:89] found id: ""
	I0729 13:41:05.686204  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.686216  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:05.686225  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:05.686279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:05.724567  301425 cri.go:89] found id: ""
	I0729 13:41:05.724604  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.724616  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:05.724628  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:05.724691  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:05.760401  301425 cri.go:89] found id: ""
	I0729 13:41:05.760430  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.760438  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:05.760448  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:05.760464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:05.811654  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:05.811698  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:05.827189  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:05.827226  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:05.899612  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:05.899636  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:05.899654  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:04.168533  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.665694  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:04.325443  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.824244  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.325499  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:07.326413  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.982384  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:05.982425  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.527609  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:08.542024  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:08.542086  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:08.576313  301425 cri.go:89] found id: ""
	I0729 13:41:08.576340  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.576348  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:08.576354  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:08.576406  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:08.609996  301425 cri.go:89] found id: ""
	I0729 13:41:08.610027  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.610038  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:08.610045  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:08.610111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:08.643722  301425 cri.go:89] found id: ""
	I0729 13:41:08.643750  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.643758  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:08.643765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:08.643815  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:08.679331  301425 cri.go:89] found id: ""
	I0729 13:41:08.679367  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.679378  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:08.679388  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:08.679459  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:08.718348  301425 cri.go:89] found id: ""
	I0729 13:41:08.718376  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.718384  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:08.718390  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:08.718444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:08.758086  301425 cri.go:89] found id: ""
	I0729 13:41:08.758128  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.758140  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:08.758150  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:08.758225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:08.794304  301425 cri.go:89] found id: ""
	I0729 13:41:08.794333  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.794345  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:08.794354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:08.794415  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:08.835448  301425 cri.go:89] found id: ""
	I0729 13:41:08.835477  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.835486  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:08.835495  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:08.835508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:08.923886  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:08.923931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.963921  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:08.963957  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:09.013852  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:09.013893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:09.027838  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:09.027872  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:09.097864  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:08.669271  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.165979  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:08.824724  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:10.825582  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:09.327071  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.826906  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.598762  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:11.612789  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:11.612903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:11.650029  301425 cri.go:89] found id: ""
	I0729 13:41:11.650063  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.650074  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:11.650084  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:11.650152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:11.687479  301425 cri.go:89] found id: ""
	I0729 13:41:11.687510  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.687520  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:11.687527  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:11.687593  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:11.723788  301425 cri.go:89] found id: ""
	I0729 13:41:11.723816  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.723824  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:11.723830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:11.723878  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:11.760304  301425 cri.go:89] found id: ""
	I0729 13:41:11.760341  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.760353  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:11.760361  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:11.760429  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:11.794175  301425 cri.go:89] found id: ""
	I0729 13:41:11.794202  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.794210  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:11.794216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:11.794276  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:11.830653  301425 cri.go:89] found id: ""
	I0729 13:41:11.830679  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.830689  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:11.830697  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:11.830755  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:11.869360  301425 cri.go:89] found id: ""
	I0729 13:41:11.869391  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.869403  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:11.869410  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:11.869473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:11.904164  301425 cri.go:89] found id: ""
	I0729 13:41:11.904195  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.904206  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:11.904218  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:11.904236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:11.979031  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.979054  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:11.979069  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:12.064215  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:12.064254  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:12.101854  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:12.101896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:12.152327  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:12.152362  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:14.668032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:14.683118  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:14.683182  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:14.722574  301425 cri.go:89] found id: ""
	I0729 13:41:14.722602  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.722612  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:14.722619  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:14.722686  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:14.759047  301425 cri.go:89] found id: ""
	I0729 13:41:14.759084  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.759094  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:14.759099  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:14.759156  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:14.794363  301425 cri.go:89] found id: ""
	I0729 13:41:14.794400  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.794411  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:14.794418  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:14.794488  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:14.831542  301425 cri.go:89] found id: ""
	I0729 13:41:14.831579  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.831586  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:14.831592  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:14.831650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:14.878710  301425 cri.go:89] found id: ""
	I0729 13:41:14.878745  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.878758  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:14.878765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:14.878824  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:14.937804  301425 cri.go:89] found id: ""
	I0729 13:41:14.937837  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.937847  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:14.937856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:14.937923  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:14.985616  301425 cri.go:89] found id: ""
	I0729 13:41:14.985649  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.985658  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:14.985665  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:14.985737  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:15.023210  301425 cri.go:89] found id: ""
	I0729 13:41:15.023248  301425 logs.go:276] 0 containers: []
	W0729 13:41:15.023261  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:15.023273  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:15.023288  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:15.072549  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:15.072587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:15.086624  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:15.086653  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:15.155391  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:15.155412  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:15.155426  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:15.237480  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:15.237535  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:13.666473  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.666831  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:13.324177  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.324419  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:14.326023  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:16.826314  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.779568  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:17.794163  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:17.794225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:17.831416  301425 cri.go:89] found id: ""
	I0729 13:41:17.831446  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.831456  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:17.831463  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:17.831519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:17.868713  301425 cri.go:89] found id: ""
	I0729 13:41:17.868740  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.868752  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:17.868758  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:17.868834  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:17.913159  301425 cri.go:89] found id: ""
	I0729 13:41:17.913200  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.913211  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:17.913221  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:17.913291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:17.947528  301425 cri.go:89] found id: ""
	I0729 13:41:17.947559  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.947567  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:17.947573  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:17.947693  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:17.982280  301425 cri.go:89] found id: ""
	I0729 13:41:17.982314  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.982323  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:17.982330  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:17.982407  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:18.023729  301425 cri.go:89] found id: ""
	I0729 13:41:18.023767  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.023776  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:18.023783  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:18.023847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:18.061594  301425 cri.go:89] found id: ""
	I0729 13:41:18.061629  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.061637  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:18.061642  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:18.061694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:18.095705  301425 cri.go:89] found id: ""
	I0729 13:41:18.095735  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.095745  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:18.095758  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:18.095778  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:18.175843  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:18.175879  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:18.222979  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:18.223015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:18.277265  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:18.277308  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:18.291002  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:18.291037  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:18.373425  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:20.873958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:20.888091  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:20.888153  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:20.925850  301425 cri.go:89] found id: ""
	I0729 13:41:20.925886  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.925894  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:20.925901  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:20.925955  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:20.962725  301425 cri.go:89] found id: ""
	I0729 13:41:20.962762  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.962774  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:20.962782  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:20.962847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:18.166668  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.166993  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.827065  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.325697  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:19.325369  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:21.326574  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.998741  301425 cri.go:89] found id: ""
	I0729 13:41:20.998778  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.998787  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:20.998794  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:20.998842  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:21.036370  301425 cri.go:89] found id: ""
	I0729 13:41:21.036401  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.036410  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:21.036417  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:21.036483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:21.071560  301425 cri.go:89] found id: ""
	I0729 13:41:21.071588  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.071597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:21.071605  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:21.071670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:21.106778  301425 cri.go:89] found id: ""
	I0729 13:41:21.106810  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.106822  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:21.106830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:21.106890  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:21.139901  301425 cri.go:89] found id: ""
	I0729 13:41:21.139926  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.139934  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:21.139940  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:21.140001  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:21.173281  301425 cri.go:89] found id: ""
	I0729 13:41:21.173312  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.173320  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:21.173330  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:21.173344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:21.225055  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:21.225095  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:21.239780  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:21.239864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:21.313460  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:21.313486  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:21.313504  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:21.398557  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:21.398599  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:23.937873  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:23.951595  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:23.951653  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:23.987177  301425 cri.go:89] found id: ""
	I0729 13:41:23.987208  301425 logs.go:276] 0 containers: []
	W0729 13:41:23.987217  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:23.987225  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:23.987324  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:24.030197  301425 cri.go:89] found id: ""
	I0729 13:41:24.030251  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.030264  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:24.030272  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:24.030339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:24.068031  301425 cri.go:89] found id: ""
	I0729 13:41:24.068061  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.068074  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:24.068081  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:24.068154  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:24.107192  301425 cri.go:89] found id: ""
	I0729 13:41:24.107221  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.107232  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:24.107239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:24.107304  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:24.143154  301425 cri.go:89] found id: ""
	I0729 13:41:24.143182  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.143190  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:24.143196  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:24.143248  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:24.181268  301425 cri.go:89] found id: ""
	I0729 13:41:24.181296  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.181304  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:24.181311  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:24.181370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:24.215248  301425 cri.go:89] found id: ""
	I0729 13:41:24.215284  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.215293  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:24.215299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:24.215363  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:24.250796  301425 cri.go:89] found id: ""
	I0729 13:41:24.250822  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.250831  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:24.250841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:24.250853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:24.305841  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:24.305883  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:24.320182  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:24.320214  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:24.389667  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:24.389690  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:24.389707  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:24.471435  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:24.471479  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:22.665718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.166432  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:22.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:24.826598  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:26.828504  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:23.825754  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.834253  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:28.329733  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:27.014508  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:27.029318  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:27.029382  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:27.064115  301425 cri.go:89] found id: ""
	I0729 13:41:27.064150  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.064161  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:27.064169  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:27.064250  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:27.099081  301425 cri.go:89] found id: ""
	I0729 13:41:27.099110  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.099123  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:27.099131  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:27.099197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:27.132475  301425 cri.go:89] found id: ""
	I0729 13:41:27.132506  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.132518  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:27.132527  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:27.132595  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:27.168924  301425 cri.go:89] found id: ""
	I0729 13:41:27.168948  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.168956  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:27.168962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:27.169015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:27.204052  301425 cri.go:89] found id: ""
	I0729 13:41:27.204082  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.204094  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:27.204109  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:27.204170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:27.238355  301425 cri.go:89] found id: ""
	I0729 13:41:27.238383  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.238391  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:27.238397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:27.238496  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:27.276104  301425 cri.go:89] found id: ""
	I0729 13:41:27.276139  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.276150  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:27.276157  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:27.276222  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:27.308612  301425 cri.go:89] found id: ""
	I0729 13:41:27.308643  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.308654  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:27.308667  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:27.308683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:27.362472  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:27.362511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:27.376349  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:27.376383  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:27.458450  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:27.458472  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:27.458486  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:27.536405  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:27.536445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:30.076285  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:30.091308  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:30.091386  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:30.138335  301425 cri.go:89] found id: ""
	I0729 13:41:30.138369  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.138381  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:30.138389  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:30.138454  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:30.176395  301425 cri.go:89] found id: ""
	I0729 13:41:30.176425  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.176435  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:30.176443  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:30.176495  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:30.214990  301425 cri.go:89] found id: ""
	I0729 13:41:30.215027  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.215035  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:30.215041  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:30.215090  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:30.252051  301425 cri.go:89] found id: ""
	I0729 13:41:30.252080  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.252088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:30.252094  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:30.252155  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:30.287210  301425 cri.go:89] found id: ""
	I0729 13:41:30.287240  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.287249  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:30.287254  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:30.287337  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:30.322813  301425 cri.go:89] found id: ""
	I0729 13:41:30.322842  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.322851  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:30.322857  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:30.322924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:30.358697  301425 cri.go:89] found id: ""
	I0729 13:41:30.358730  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.358738  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:30.358744  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:30.358804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:30.394252  301425 cri.go:89] found id: ""
	I0729 13:41:30.394283  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.394294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:30.394305  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:30.394321  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:30.446777  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:30.446820  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:30.461564  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:30.461605  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:30.537918  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:30.537942  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:30.537958  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:30.613821  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:30.613865  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:27.167654  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.666133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.323396  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:31.324718  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:30.825879  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:32.826458  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.154081  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:33.168252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:33.168353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:33.205675  301425 cri.go:89] found id: ""
	I0729 13:41:33.205708  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.205719  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:33.205727  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:33.205799  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:33.240556  301425 cri.go:89] found id: ""
	I0729 13:41:33.240582  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.240590  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:33.240596  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:33.240644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:33.276662  301425 cri.go:89] found id: ""
	I0729 13:41:33.276690  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.276698  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:33.276704  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:33.276773  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:33.318631  301425 cri.go:89] found id: ""
	I0729 13:41:33.318667  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.318677  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:33.318685  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:33.318762  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:33.354372  301425 cri.go:89] found id: ""
	I0729 13:41:33.354403  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.354412  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:33.354421  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:33.354475  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:33.389309  301425 cri.go:89] found id: ""
	I0729 13:41:33.389337  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.389346  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:33.389352  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:33.389404  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:33.423689  301425 cri.go:89] found id: ""
	I0729 13:41:33.423732  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.423745  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:33.423753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:33.423823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:33.457556  301425 cri.go:89] found id: ""
	I0729 13:41:33.457593  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.457605  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:33.457618  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:33.457634  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:33.534377  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:33.534416  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.579646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:33.579689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:33.629784  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:33.629819  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:33.643878  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:33.643912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:33.716446  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:32.167152  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:34.666054  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.667479  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.823726  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.824199  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.324827  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.325672  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.216598  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:36.229904  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:36.230003  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:36.263721  301425 cri.go:89] found id: ""
	I0729 13:41:36.263752  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.263771  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:36.263786  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:36.263838  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:36.297900  301425 cri.go:89] found id: ""
	I0729 13:41:36.297932  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.297950  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:36.297958  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:36.298023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:36.338037  301425 cri.go:89] found id: ""
	I0729 13:41:36.338064  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.338072  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:36.338078  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:36.338125  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:36.375334  301425 cri.go:89] found id: ""
	I0729 13:41:36.375362  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.375370  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:36.375375  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:36.375426  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:36.410760  301425 cri.go:89] found id: ""
	I0729 13:41:36.410794  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.410805  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:36.410813  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:36.410888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:36.445247  301425 cri.go:89] found id: ""
	I0729 13:41:36.445280  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.445291  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:36.445300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:36.445364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:36.487183  301425 cri.go:89] found id: ""
	I0729 13:41:36.487214  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.487221  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:36.487228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:36.487301  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:36.522407  301425 cri.go:89] found id: ""
	I0729 13:41:36.522433  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.522442  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:36.522453  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:36.522468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:36.537163  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:36.537197  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:36.608334  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.608361  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:36.608376  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:36.689026  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:36.689074  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:36.728580  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:36.728618  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.279605  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:39.293259  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:39.293320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:39.329070  301425 cri.go:89] found id: ""
	I0729 13:41:39.329095  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.329103  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:39.329109  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:39.329160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:39.362992  301425 cri.go:89] found id: ""
	I0729 13:41:39.363023  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.363032  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:39.363038  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:39.363100  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:39.403094  301425 cri.go:89] found id: ""
	I0729 13:41:39.403128  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.403140  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:39.403147  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:39.403201  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:39.435761  301425 cri.go:89] found id: ""
	I0729 13:41:39.435795  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.435806  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:39.435814  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:39.435881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:39.468299  301425 cri.go:89] found id: ""
	I0729 13:41:39.468332  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.468341  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:39.468349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:39.468417  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:39.505114  301425 cri.go:89] found id: ""
	I0729 13:41:39.505149  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.505162  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:39.505172  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:39.505234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:39.536942  301425 cri.go:89] found id: ""
	I0729 13:41:39.536975  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.536986  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:39.536994  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:39.537064  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:39.577394  301425 cri.go:89] found id: ""
	I0729 13:41:39.577427  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.577439  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:39.577451  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:39.577468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.631143  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:39.631184  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:39.645020  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:39.645047  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:39.718256  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:39.718283  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:39.718297  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:39.801990  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:39.802036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:39.166762  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.167646  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.824966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.825836  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.324009  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.327169  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.826091  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.347066  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:42.359902  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:42.359983  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:42.395494  301425 cri.go:89] found id: ""
	I0729 13:41:42.395529  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.395540  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:42.395548  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:42.395611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:42.429305  301425 cri.go:89] found id: ""
	I0729 13:41:42.429334  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.429343  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:42.429350  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:42.429401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:42.466902  301425 cri.go:89] found id: ""
	I0729 13:41:42.466931  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.466942  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:42.466949  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:42.467017  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:42.504582  301425 cri.go:89] found id: ""
	I0729 13:41:42.504618  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.504628  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:42.504652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:42.504717  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:42.539649  301425 cri.go:89] found id: ""
	I0729 13:41:42.539676  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.539686  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:42.539695  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:42.539758  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:42.579209  301425 cri.go:89] found id: ""
	I0729 13:41:42.579238  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.579249  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:42.579257  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:42.579320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:42.614832  301425 cri.go:89] found id: ""
	I0729 13:41:42.614861  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.614869  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:42.614874  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:42.614925  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:42.651837  301425 cri.go:89] found id: ""
	I0729 13:41:42.651865  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.651873  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:42.651883  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:42.651899  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:42.707149  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:42.707190  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:42.720990  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:42.721043  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:42.789818  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:42.789849  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:42.789867  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:42.871880  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:42.871934  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.416172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:45.428923  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:45.428994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:45.466667  301425 cri.go:89] found id: ""
	I0729 13:41:45.466699  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.466710  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:45.466717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:45.466783  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:45.501779  301425 cri.go:89] found id: ""
	I0729 13:41:45.501813  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.501825  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:45.501832  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:45.501896  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:45.537507  301425 cri.go:89] found id: ""
	I0729 13:41:45.537537  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.537547  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:45.537554  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:45.537619  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:45.575430  301425 cri.go:89] found id: ""
	I0729 13:41:45.575460  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.575467  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:45.575474  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:45.575523  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:45.613009  301425 cri.go:89] found id: ""
	I0729 13:41:45.613038  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.613047  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:45.613053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:45.613103  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:45.650734  301425 cri.go:89] found id: ""
	I0729 13:41:45.650767  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.650778  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:45.650786  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:45.650853  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:45.684301  301425 cri.go:89] found id: ""
	I0729 13:41:45.684332  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.684341  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:45.684349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:45.684416  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:45.719861  301425 cri.go:89] found id: ""
	I0729 13:41:45.719901  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.719911  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:45.719921  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:45.719936  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:45.800422  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:45.800464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.842460  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:45.842493  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:45.897388  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:45.897430  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:45.911554  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:45.911587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:41:43.665771  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.666196  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:44.325813  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:46.824774  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:43.828518  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.830106  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:48.325196  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	W0729 13:41:45.984435  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.485014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:48.498038  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:48.498110  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:48.534248  301425 cri.go:89] found id: ""
	I0729 13:41:48.534280  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.534291  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:48.534299  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:48.534362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:48.572411  301425 cri.go:89] found id: ""
	I0729 13:41:48.572445  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.572457  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:48.572465  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:48.572524  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:48.612345  301425 cri.go:89] found id: ""
	I0729 13:41:48.612373  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.612381  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:48.612387  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:48.612450  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:48.650334  301425 cri.go:89] found id: ""
	I0729 13:41:48.650385  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.650395  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:48.650401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:48.650466  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:48.687460  301425 cri.go:89] found id: ""
	I0729 13:41:48.687490  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.687501  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:48.687508  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:48.687572  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:48.735028  301425 cri.go:89] found id: ""
	I0729 13:41:48.735064  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.735077  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:48.735085  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:48.735142  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:48.771175  301425 cri.go:89] found id: ""
	I0729 13:41:48.771209  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.771220  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:48.771228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:48.771300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:48.808267  301425 cri.go:89] found id: ""
	I0729 13:41:48.808295  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.808304  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:48.808314  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:48.808328  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:48.850520  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:48.850557  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:48.902563  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:48.902612  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:48.919082  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:48.919114  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:48.999185  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.999213  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:48.999241  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:48.166020  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:49.323402  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.326596  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.825399  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:52.831823  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.579922  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:51.593149  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:51.593213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:51.626302  301425 cri.go:89] found id: ""
	I0729 13:41:51.626330  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.626338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:51.626344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:51.626393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:51.659551  301425 cri.go:89] found id: ""
	I0729 13:41:51.659578  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.659586  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:51.659592  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:51.659642  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:51.696842  301425 cri.go:89] found id: ""
	I0729 13:41:51.696868  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.696876  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:51.696882  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:51.696937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:51.737209  301425 cri.go:89] found id: ""
	I0729 13:41:51.737237  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.737246  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:51.737253  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:51.737317  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:51.772782  301425 cri.go:89] found id: ""
	I0729 13:41:51.772829  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.772842  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:51.772850  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:51.772921  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:51.806649  301425 cri.go:89] found id: ""
	I0729 13:41:51.806679  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.806690  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:51.806698  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:51.806771  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:51.848950  301425 cri.go:89] found id: ""
	I0729 13:41:51.848978  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.848989  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:51.848997  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:51.849065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:51.884875  301425 cri.go:89] found id: ""
	I0729 13:41:51.884902  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.884910  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:51.884920  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:51.884932  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.964282  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:51.964322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:52.004218  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:52.004251  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:52.056230  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:52.056266  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.069591  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:52.069622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:52.142552  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:54.643154  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:54.657199  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:54.657259  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:54.694124  301425 cri.go:89] found id: ""
	I0729 13:41:54.694152  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.694159  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:54.694165  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:54.694221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:54.732072  301425 cri.go:89] found id: ""
	I0729 13:41:54.732109  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.732119  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:54.732127  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:54.732194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:54.768257  301425 cri.go:89] found id: ""
	I0729 13:41:54.768294  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.768306  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:54.768314  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:54.768383  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:54.807596  301425 cri.go:89] found id: ""
	I0729 13:41:54.807631  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.807643  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:54.807651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:54.807716  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:54.845107  301425 cri.go:89] found id: ""
	I0729 13:41:54.845134  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.845142  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:54.845148  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:54.845197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:54.880627  301425 cri.go:89] found id: ""
	I0729 13:41:54.880655  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.880667  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:54.880675  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:54.880750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:54.918122  301425 cri.go:89] found id: ""
	I0729 13:41:54.918151  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.918159  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:54.918165  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:54.918219  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:54.956943  301425 cri.go:89] found id: ""
	I0729 13:41:54.956986  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.956999  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:54.957022  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:54.957036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:55.032512  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:55.032547  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:55.032564  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:55.116653  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:55.116699  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:55.177030  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:55.177059  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:55.238789  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:55.238831  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.166339  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:54.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:53.824694  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:56.324761  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:55.324698  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.326135  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.753504  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:57.766354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:57.766436  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:57.802691  301425 cri.go:89] found id: ""
	I0729 13:41:57.802728  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.802740  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:57.802746  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:57.802807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:57.839800  301425 cri.go:89] found id: ""
	I0729 13:41:57.839823  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.839830  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:57.839846  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:57.839902  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:57.881592  301425 cri.go:89] found id: ""
	I0729 13:41:57.881617  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.881625  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:57.881631  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:57.881681  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.916245  301425 cri.go:89] found id: ""
	I0729 13:41:57.916273  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.916282  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:57.916290  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:57.916346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:57.952224  301425 cri.go:89] found id: ""
	I0729 13:41:57.952261  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.952272  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:57.952280  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:57.952340  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:57.985508  301425 cri.go:89] found id: ""
	I0729 13:41:57.985537  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.985548  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:57.985557  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:57.985624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:58.022354  301425 cri.go:89] found id: ""
	I0729 13:41:58.022382  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.022391  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:58.022397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:58.022462  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:58.055865  301425 cri.go:89] found id: ""
	I0729 13:41:58.055891  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.055900  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:58.055914  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:58.055931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:58.069143  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:58.069177  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:58.143137  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:58.143164  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:58.143183  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:58.224631  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:58.224672  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:58.266437  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:58.266470  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:00.819300  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:00.834195  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:00.834258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:00.869660  301425 cri.go:89] found id: ""
	I0729 13:42:00.869697  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.869709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:00.869717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:00.869777  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:00.915601  301425 cri.go:89] found id: ""
	I0729 13:42:00.915630  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.915638  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:00.915644  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:00.915694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:00.956981  301425 cri.go:89] found id: ""
	I0729 13:42:00.957020  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.957028  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:00.957034  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:00.957094  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.166038  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.666455  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.666824  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:58.824729  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.825513  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.825074  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.826480  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.995761  301425 cri.go:89] found id: ""
	I0729 13:42:00.995793  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.995801  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:00.995817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:00.995869  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:01.047668  301425 cri.go:89] found id: ""
	I0729 13:42:01.047699  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.047707  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:01.047713  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:01.047787  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:01.085178  301425 cri.go:89] found id: ""
	I0729 13:42:01.085209  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.085217  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:01.085224  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:01.085278  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:01.125282  301425 cri.go:89] found id: ""
	I0729 13:42:01.125310  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.125320  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:01.125329  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:01.125396  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:01.165972  301425 cri.go:89] found id: ""
	I0729 13:42:01.166005  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.166021  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:01.166033  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:01.166049  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:01.236500  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:01.236523  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:01.236540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:01.320918  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:01.320959  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:01.366975  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:01.367015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:01.420347  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:01.420389  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:03.936048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:03.949603  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:03.949679  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:03.987529  301425 cri.go:89] found id: ""
	I0729 13:42:03.987557  301425 logs.go:276] 0 containers: []
	W0729 13:42:03.987567  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:03.987574  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:03.987639  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:04.027325  301425 cri.go:89] found id: ""
	I0729 13:42:04.027355  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.027365  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:04.027372  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:04.027437  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:04.063019  301425 cri.go:89] found id: ""
	I0729 13:42:04.063050  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.063059  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:04.063065  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:04.063117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:04.101106  301425 cri.go:89] found id: ""
	I0729 13:42:04.101135  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.101146  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:04.101153  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:04.101242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:04.137186  301425 cri.go:89] found id: ""
	I0729 13:42:04.137219  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.137230  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:04.137238  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:04.137302  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:04.175732  301425 cri.go:89] found id: ""
	I0729 13:42:04.175761  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.175770  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:04.175776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:04.175826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:04.213265  301425 cri.go:89] found id: ""
	I0729 13:42:04.213296  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.213307  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:04.213315  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:04.213381  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:04.248581  301425 cri.go:89] found id: ""
	I0729 13:42:04.248609  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.248617  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:04.248627  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:04.248643  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:04.303277  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:04.303400  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:04.317518  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:04.317547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:04.385209  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:04.385229  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:04.385242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:04.470629  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:04.470680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:04.167299  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.168006  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.324087  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:05.324904  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.826588  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.325326  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:08.326125  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.012455  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:07.028535  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:07.028621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:07.063453  301425 cri.go:89] found id: ""
	I0729 13:42:07.063496  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.063505  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:07.063511  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:07.063582  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:07.098243  301425 cri.go:89] found id: ""
	I0729 13:42:07.098274  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.098284  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:07.098291  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:07.098357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:07.138122  301425 cri.go:89] found id: ""
	I0729 13:42:07.138149  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.138157  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:07.138162  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:07.138213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:07.176772  301425 cri.go:89] found id: ""
	I0729 13:42:07.176814  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.176826  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:07.176835  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:07.176894  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:07.214867  301425 cri.go:89] found id: ""
	I0729 13:42:07.214898  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.214914  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:07.214920  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:07.214979  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:07.253443  301425 cri.go:89] found id: ""
	I0729 13:42:07.253471  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.253481  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:07.253490  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:07.253550  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:07.287284  301425 cri.go:89] found id: ""
	I0729 13:42:07.287326  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.287338  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:07.287349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:07.287411  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:07.330550  301425 cri.go:89] found id: ""
	I0729 13:42:07.330577  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.330588  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:07.330599  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:07.330620  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:07.384226  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:07.384268  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:07.398790  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:07.398817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:07.462868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:07.462893  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:07.462914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:07.538665  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:07.538706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.078452  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:10.091962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:10.092027  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:10.127401  301425 cri.go:89] found id: ""
	I0729 13:42:10.127434  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.127445  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:10.127454  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:10.127531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:10.161088  301425 cri.go:89] found id: ""
	I0729 13:42:10.161117  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.161127  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:10.161134  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:10.161187  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:10.199721  301425 cri.go:89] found id: ""
	I0729 13:42:10.199751  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.199763  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:10.199769  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:10.199821  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:10.237067  301425 cri.go:89] found id: ""
	I0729 13:42:10.237106  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.237120  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:10.237127  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:10.237191  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:10.275863  301425 cri.go:89] found id: ""
	I0729 13:42:10.275894  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.275909  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:10.275918  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:10.275981  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:10.313234  301425 cri.go:89] found id: ""
	I0729 13:42:10.313262  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.313270  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:10.313276  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:10.313334  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:10.353530  301425 cri.go:89] found id: ""
	I0729 13:42:10.353558  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.353569  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:10.353576  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:10.353644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:10.389488  301425 cri.go:89] found id: ""
	I0729 13:42:10.389516  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.389527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:10.389539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:10.389562  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.428705  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:10.428740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:10.484413  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:10.484456  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:10.499203  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:10.499248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:10.570868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:10.570894  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:10.570907  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:08.667158  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:11.166721  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.825638  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.324753  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.326752  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:12.826001  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:13.151788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:13.165297  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:13.165367  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:13.203752  301425 cri.go:89] found id: ""
	I0729 13:42:13.203786  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.203798  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:13.203805  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:13.203874  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:13.240454  301425 cri.go:89] found id: ""
	I0729 13:42:13.240491  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.240499  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:13.240504  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:13.240556  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:13.276508  301425 cri.go:89] found id: ""
	I0729 13:42:13.276536  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.276545  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:13.276553  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:13.276617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:13.311252  301425 cri.go:89] found id: ""
	I0729 13:42:13.311280  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.311291  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:13.311299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:13.311353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:13.351777  301425 cri.go:89] found id: ""
	I0729 13:42:13.351808  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.351817  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:13.351823  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:13.351881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:13.389020  301425 cri.go:89] found id: ""
	I0729 13:42:13.389049  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.389058  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:13.389064  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:13.389126  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:13.424353  301425 cri.go:89] found id: ""
	I0729 13:42:13.424387  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.424395  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:13.424401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:13.424451  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:13.460755  301425 cri.go:89] found id: ""
	I0729 13:42:13.460788  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.460817  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:13.460830  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:13.460850  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:13.500201  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:13.500234  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:13.553319  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:13.553357  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:13.567496  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:13.567529  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:13.644662  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:13.644686  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:13.644700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:13.667287  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.160289  301044 pod_ready.go:81] duration metric: took 4m0.000442608s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:16.160321  301044 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 13:42:16.160342  301044 pod_ready.go:38] duration metric: took 4m7.984743222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:16.160378  301044 kubeadm.go:597] duration metric: took 4m16.091281244s to restartPrimaryControlPlane
	W0729 13:42:16.160459  301044 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:16.160486  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:12.825387  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.826853  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.827679  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.829149  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326337  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326370  300746 pod_ready.go:81] duration metric: took 4m0.007721109s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:17.326383  300746 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:42:17.326392  300746 pod_ready.go:38] duration metric: took 4m8.417741792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:17.326410  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:42:17.326446  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:17.326514  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:17.373993  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.374027  300746 cri.go:89] found id: ""
	I0729 13:42:17.374037  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:17.374118  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.384841  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:17.384929  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:17.422219  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.422253  300746 cri.go:89] found id: ""
	I0729 13:42:17.422263  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:17.422349  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.427319  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:17.427385  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:17.469310  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:17.469336  300746 cri.go:89] found id: ""
	I0729 13:42:17.469347  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:17.469412  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.474501  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:17.474590  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:17.520767  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:17.520808  300746 cri.go:89] found id: ""
	I0729 13:42:17.520818  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:17.520881  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.525543  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:17.525643  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:17.572718  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.572749  300746 cri.go:89] found id: ""
	I0729 13:42:17.572758  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:17.572839  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.577227  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:17.577304  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:17.614076  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.614098  300746 cri.go:89] found id: ""
	I0729 13:42:17.614106  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:17.614153  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.618404  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:17.618479  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:17.666242  300746 cri.go:89] found id: ""
	I0729 13:42:17.666275  300746 logs.go:276] 0 containers: []
	W0729 13:42:17.666285  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:17.666301  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:17.666373  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:17.713379  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:17.713411  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:17.713418  300746 cri.go:89] found id: ""
	I0729 13:42:17.713428  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:17.713493  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.719026  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.723948  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:17.723974  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:17.743561  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:17.743607  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.803393  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:17.803425  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.855689  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:17.855723  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.898327  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:17.898361  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.951024  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:17.951060  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:18.014040  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:18.014082  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:18.159937  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:18.159984  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:18.201626  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:18.201667  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:18.247168  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:18.247211  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:18.291431  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:18.291469  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:18.333636  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:18.333671  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.226602  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:16.242934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:16.243005  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:16.284033  301425 cri.go:89] found id: ""
	I0729 13:42:16.284064  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.284075  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:16.284083  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:16.284152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:16.328362  301425 cri.go:89] found id: ""
	I0729 13:42:16.328388  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.328396  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:16.328402  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:16.328464  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:16.372664  301425 cri.go:89] found id: ""
	I0729 13:42:16.372701  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.372712  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:16.372727  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:16.372818  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:16.416085  301425 cri.go:89] found id: ""
	I0729 13:42:16.416119  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.416130  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:16.416138  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:16.416194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:16.457786  301425 cri.go:89] found id: ""
	I0729 13:42:16.457819  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.457830  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:16.457838  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:16.457903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:16.498929  301425 cri.go:89] found id: ""
	I0729 13:42:16.498962  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.498971  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:16.498979  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:16.499043  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:16.546159  301425 cri.go:89] found id: ""
	I0729 13:42:16.546187  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.546199  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:16.546207  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:16.546270  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:16.585010  301425 cri.go:89] found id: ""
	I0729 13:42:16.585041  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.585052  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:16.585065  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:16.585081  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:16.639033  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:16.639079  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:16.656209  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:16.656242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:16.734835  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:16.734863  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:16.734940  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.818756  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:16.818798  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.370796  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:19.384267  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:19.384354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:19.425595  301425 cri.go:89] found id: ""
	I0729 13:42:19.425629  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.425641  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:19.425650  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:19.425715  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:19.461470  301425 cri.go:89] found id: ""
	I0729 13:42:19.461506  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.461517  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:19.461524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:19.461592  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:19.508232  301425 cri.go:89] found id: ""
	I0729 13:42:19.508265  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.508275  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:19.508283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:19.508360  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:19.546226  301425 cri.go:89] found id: ""
	I0729 13:42:19.546259  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.546275  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:19.546283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:19.546354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:19.581125  301425 cri.go:89] found id: ""
	I0729 13:42:19.581156  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.581167  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:19.581176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:19.581242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:19.619680  301425 cri.go:89] found id: ""
	I0729 13:42:19.619719  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.619728  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:19.619736  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:19.619800  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:19.657096  301425 cri.go:89] found id: ""
	I0729 13:42:19.657126  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.657136  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:19.657142  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:19.657203  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:19.697247  301425 cri.go:89] found id: ""
	I0729 13:42:19.697277  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.697286  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:19.697297  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:19.697312  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:19.714900  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:19.714935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:19.794118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:19.794145  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:19.794161  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:19.907077  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:19.907122  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.949841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:19.949871  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:19.324474  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:21.826117  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:18.858720  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:18.858773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:21.419344  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:21.440121  300746 api_server.go:72] duration metric: took 4m17.790553991s to wait for apiserver process to appear ...
	I0729 13:42:21.440149  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:42:21.440190  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:21.440242  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:21.485874  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:21.485897  300746 cri.go:89] found id: ""
	I0729 13:42:21.485905  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:21.485956  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.490424  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:21.490493  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:21.532174  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:21.532202  300746 cri.go:89] found id: ""
	I0729 13:42:21.532211  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:21.532259  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.536561  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:21.536622  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:21.579375  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:21.579397  300746 cri.go:89] found id: ""
	I0729 13:42:21.579404  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:21.579450  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.584710  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:21.584779  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:21.621437  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.621465  300746 cri.go:89] found id: ""
	I0729 13:42:21.621475  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:21.621536  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.625829  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:21.625898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:21.666063  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:21.666086  300746 cri.go:89] found id: ""
	I0729 13:42:21.666095  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:21.666162  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.670822  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:21.670898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:21.713993  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:21.714022  300746 cri.go:89] found id: ""
	I0729 13:42:21.714032  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:21.714099  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.718967  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:21.719044  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:21.761282  300746 cri.go:89] found id: ""
	I0729 13:42:21.761312  300746 logs.go:276] 0 containers: []
	W0729 13:42:21.761320  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:21.761327  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:21.761390  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:21.810085  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:21.810114  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:21.810121  300746 cri.go:89] found id: ""
	I0729 13:42:21.810130  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:21.810185  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.814713  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.819968  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:21.819996  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:21.834798  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:21.834823  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:21.957963  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:21.958000  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.995345  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:21.995376  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:22.037737  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:22.037773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:22.074774  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:22.074813  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:22.123172  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.123205  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.181432  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:22.181473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:22.237128  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:22.237162  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:22.285733  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:22.285766  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:22.328258  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:22.328291  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:22.381239  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.381276  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:22.840466  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:22.840504  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:22.515296  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:22.529187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:22.529286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:22.573033  301425 cri.go:89] found id: ""
	I0729 13:42:22.573070  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.573082  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:22.573091  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:22.573152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:22.608443  301425 cri.go:89] found id: ""
	I0729 13:42:22.608476  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.608489  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:22.608496  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:22.608566  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:22.641672  301425 cri.go:89] found id: ""
	I0729 13:42:22.641704  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.641716  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:22.641724  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:22.641781  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:22.673902  301425 cri.go:89] found id: ""
	I0729 13:42:22.673934  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.673944  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:22.673952  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:22.674012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:22.715131  301425 cri.go:89] found id: ""
	I0729 13:42:22.715165  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.715179  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:22.715187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:22.715251  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:22.748807  301425 cri.go:89] found id: ""
	I0729 13:42:22.748838  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.748848  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:22.748856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:22.748924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:22.781972  301425 cri.go:89] found id: ""
	I0729 13:42:22.782002  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.782012  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:22.782021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:22.782088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:22.815791  301425 cri.go:89] found id: ""
	I0729 13:42:22.815823  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.815834  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:22.815848  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.815864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.873595  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:22.873631  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:22.888081  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:22.888123  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:22.959873  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:22.959899  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.959912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:23.040996  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:23.041035  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:25.585159  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:25.604154  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.604240  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.645428  301425 cri.go:89] found id: ""
	I0729 13:42:25.645459  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.645466  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:25.645474  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.645534  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.682758  301425 cri.go:89] found id: ""
	I0729 13:42:25.682785  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.682793  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:25.682799  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.682864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.724297  301425 cri.go:89] found id: ""
	I0729 13:42:25.724330  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.724341  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:25.724349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.724401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.761124  301425 cri.go:89] found id: ""
	I0729 13:42:25.761157  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.761168  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:25.761177  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.761229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.802698  301425 cri.go:89] found id: ""
	I0729 13:42:25.802728  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.802741  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:25.802750  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.802804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.840472  301425 cri.go:89] found id: ""
	I0729 13:42:25.840499  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.840509  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:25.840516  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.840586  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.875217  301425 cri.go:89] found id: ""
	I0729 13:42:25.875255  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.875267  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.875273  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:25.875345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:25.919895  301425 cri.go:89] found id: ""
	I0729 13:42:25.919937  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.919948  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:25.919963  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.919988  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:24.324138  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:26.324843  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:25.399606  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:42:25.405339  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:42:25.406585  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:42:25.406607  300746 api_server.go:131] duration metric: took 3.966451518s to wait for apiserver health ...
	I0729 13:42:25.406615  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:42:25.406640  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.406686  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.442039  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:25.442068  300746 cri.go:89] found id: ""
	I0729 13:42:25.442079  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:25.442140  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.446769  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.446830  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.482122  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:25.482144  300746 cri.go:89] found id: ""
	I0729 13:42:25.482156  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:25.482211  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.486666  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.486729  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.534553  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:25.534584  300746 cri.go:89] found id: ""
	I0729 13:42:25.534595  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:25.534657  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.539546  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.539624  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.577538  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.577562  300746 cri.go:89] found id: ""
	I0729 13:42:25.577572  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:25.577635  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.582377  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.582457  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.628918  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:25.628945  300746 cri.go:89] found id: ""
	I0729 13:42:25.628955  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:25.629027  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.633502  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.633592  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.673133  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.673156  300746 cri.go:89] found id: ""
	I0729 13:42:25.673163  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:25.673210  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.677905  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.677994  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.724757  300746 cri.go:89] found id: ""
	I0729 13:42:25.724780  300746 logs.go:276] 0 containers: []
	W0729 13:42:25.724805  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.724813  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:25.724887  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:25.775101  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.775130  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:25.775136  300746 cri.go:89] found id: ""
	I0729 13:42:25.775144  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:25.775219  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.782008  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.787032  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:25.787064  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.834985  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:25.835026  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.897295  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:25.897338  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.938020  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.938053  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:26.002775  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:26.002808  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:26.021431  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:26.021473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:26.071861  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:26.071898  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:26.130018  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:26.130057  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:26.170233  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:26.170290  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:26.207687  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.207718  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.600518  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:26.600575  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:26.707024  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:26.707074  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:26.753205  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.753240  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:29.302597  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:42:29.302626  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.302630  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.302634  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.302638  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.302641  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.302644  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.302649  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.302654  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.302661  300746 system_pods.go:74] duration metric: took 3.896040202s to wait for pod list to return data ...
	I0729 13:42:29.302670  300746 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:42:29.305640  300746 default_sa.go:45] found service account: "default"
	I0729 13:42:29.305668  300746 default_sa.go:55] duration metric: took 2.989028ms for default service account to be created ...
	I0729 13:42:29.305679  300746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:42:29.310472  300746 system_pods.go:86] 8 kube-system pods found
	I0729 13:42:29.310495  300746 system_pods.go:89] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.310500  300746 system_pods.go:89] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.310505  300746 system_pods.go:89] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.310509  300746 system_pods.go:89] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.310513  300746 system_pods.go:89] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.310517  300746 system_pods.go:89] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.310523  300746 system_pods.go:89] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.310528  300746 system_pods.go:89] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.310536  300746 system_pods.go:126] duration metric: took 4.851477ms to wait for k8s-apps to be running ...
	I0729 13:42:29.310545  300746 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:42:29.310580  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.329123  300746 system_svc.go:56] duration metric: took 18.569258ms WaitForService to wait for kubelet
	I0729 13:42:29.329155  300746 kubeadm.go:582] duration metric: took 4m25.679589837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:42:29.329182  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:42:29.332696  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:42:29.332726  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:42:29.332741  300746 node_conditions.go:105] duration metric: took 3.551684ms to run NodePressure ...
	I0729 13:42:29.332756  300746 start.go:241] waiting for startup goroutines ...
	I0729 13:42:29.332770  300746 start.go:246] waiting for cluster config update ...
	I0729 13:42:29.332784  300746 start.go:255] writing updated cluster config ...
	I0729 13:42:29.333168  300746 ssh_runner.go:195] Run: rm -f paused
	I0729 13:42:29.394738  300746 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:42:29.396826  300746 out.go:177] * Done! kubectl is now configured to use "no-preload-566777" cluster and "default" namespace by default
	I0729 13:42:25.981964  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:25.982005  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:25.997546  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:25.997576  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:26.075879  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:26.075901  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.075917  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.158552  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.158593  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:28.704328  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:28.718946  301425 kubeadm.go:597] duration metric: took 4m3.546660825s to restartPrimaryControlPlane
	W0729 13:42:28.719041  301425 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:28.719086  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:29.251866  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.267009  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:29.277498  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:29.287980  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:29.288003  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:29.288054  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:42:29.297830  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:29.297890  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:29.308263  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:42:29.318332  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:29.318388  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:29.328684  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.339841  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:29.339894  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.351304  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:42:29.363901  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:29.363960  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:29.377255  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:29.453113  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:42:29.453212  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:29.609835  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:29.609970  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:29.610106  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:29.812529  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:29.814455  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:29.814551  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:29.814633  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:29.814727  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:29.814799  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:29.814915  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:29.814979  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:29.815695  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:29.816098  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:29.816602  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:29.817114  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:29.817184  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:29.817266  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:30.122967  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:30.287162  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:30.336346  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:30.516317  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:30.532829  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:30.533732  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:30.533809  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:30.672345  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:30.674334  301425 out.go:204]   - Booting up control plane ...
	I0729 13:42:30.674492  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:30.681661  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:30.681784  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:30.683350  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:30.687290  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:42:28.327998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:30.823998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:32.824105  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:34.825475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:37.324435  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:39.824490  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:42.323305  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:44.329376  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:46.823645  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:47.980926  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.820407091s)
	I0729 13:42:47.981010  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:47.997344  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:48.007813  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:48.017519  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:48.017538  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:48.017579  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:42:48.028739  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:48.028819  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:48.038417  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:42:48.047864  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:48.047921  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:48.057408  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.066977  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:48.067040  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.077017  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:42:48.087204  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:48.087267  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:48.097659  301044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:48.149712  301044 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:42:48.149883  301044 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:48.277280  301044 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:48.277441  301044 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:48.277578  301044 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:48.505523  301044 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:48.507718  301044 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:48.507827  301044 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:48.507941  301044 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:48.508049  301044 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:48.508139  301044 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:48.508245  301044 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:48.508334  301044 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:48.508431  301044 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:48.508518  301044 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:48.508622  301044 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:48.508740  301044 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:48.508824  301044 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:48.508949  301044 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:48.545220  301044 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:48.620528  301044 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:42:48.781015  301044 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:49.039301  301044 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:49.104540  301044 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:49.105022  301044 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:49.107524  301044 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:49.109579  301044 out.go:204]   - Booting up control plane ...
	I0729 13:42:49.109698  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:49.109836  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:49.109924  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:49.129789  301044 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:49.130766  301044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:49.130844  301044 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:49.272901  301044 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:42:49.273017  301044 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:42:50.274804  301044 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001903151s
	I0729 13:42:50.274906  301044 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:42:48.825621  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:51.324025  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.276427  301044 kubeadm.go:310] [api-check] The API server is healthy after 5.001280529s
	I0729 13:42:55.289666  301044 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:42:55.309747  301044 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:42:55.343304  301044 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:42:55.343537  301044 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-972693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:42:55.366319  301044 kubeadm.go:310] [bootstrap-token] Using token: bvsox4.ktqddck1jfi3aduz
	I0729 13:42:55.367592  301044 out.go:204]   - Configuring RBAC rules ...
	I0729 13:42:55.367695  301044 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:42:55.380118  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:42:55.393704  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:42:55.397859  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:42:55.401567  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:42:55.407851  301044 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:42:55.684714  301044 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:42:56.128597  301044 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:42:56.683879  301044 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:42:56.685050  301044 kubeadm.go:310] 
	I0729 13:42:56.685127  301044 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:42:56.685137  301044 kubeadm.go:310] 
	I0729 13:42:56.685216  301044 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:42:56.685226  301044 kubeadm.go:310] 
	I0729 13:42:56.685252  301044 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:42:56.685335  301044 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:42:56.685414  301044 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:42:56.685422  301044 kubeadm.go:310] 
	I0729 13:42:56.685527  301044 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:42:56.685550  301044 kubeadm.go:310] 
	I0729 13:42:56.685607  301044 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:42:56.685617  301044 kubeadm.go:310] 
	I0729 13:42:56.685684  301044 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:42:56.685800  301044 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:42:56.685916  301044 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:42:56.685933  301044 kubeadm.go:310] 
	I0729 13:42:56.686048  301044 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:42:56.686149  301044 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:42:56.686162  301044 kubeadm.go:310] 
	I0729 13:42:56.686277  301044 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686416  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 13:42:56.686449  301044 kubeadm.go:310] 	--control-plane 
	I0729 13:42:56.686462  301044 kubeadm.go:310] 
	I0729 13:42:56.686562  301044 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:42:56.686571  301044 kubeadm.go:310] 
	I0729 13:42:56.686687  301044 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686839  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 13:42:56.687046  301044 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:42:56.687123  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:42:56.687140  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:42:56.689013  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:42:53.324453  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.326475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:56.690282  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:42:56.703026  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:42:56.722677  301044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-972693 minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=default-k8s-diff-port-972693 minikube.k8s.io/primary=true
	I0729 13:42:56.738921  301044 ops.go:34] apiserver oom_adj: -16
	I0729 13:42:56.902369  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.402842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.902902  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.403358  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.903112  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.402540  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.902605  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.402440  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.903011  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:01.403295  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.823966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:00.323772  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:01.818493  300705 pod_ready.go:81] duration metric: took 4m0.000972043s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:43:01.818528  300705 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:43:01.818537  300705 pod_ready.go:38] duration metric: took 4m4.037818748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:01.818555  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:01.818589  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:01.818643  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:01.874334  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:01.874359  300705 cri.go:89] found id: ""
	I0729 13:43:01.874369  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:01.874439  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.879122  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:01.879214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:01.919779  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:01.919804  300705 cri.go:89] found id: ""
	I0729 13:43:01.919814  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:01.919874  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.924895  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:01.924963  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:01.970365  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:01.970386  300705 cri.go:89] found id: ""
	I0729 13:43:01.970394  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:01.970444  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.975331  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:01.975409  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:02.013029  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.013062  300705 cri.go:89] found id: ""
	I0729 13:43:02.013074  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:02.013136  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.017958  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:02.018019  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:02.062357  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.062385  300705 cri.go:89] found id: ""
	I0729 13:43:02.062394  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:02.062463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.066791  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:02.066841  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:02.103790  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:02.103812  300705 cri.go:89] found id: ""
	I0729 13:43:02.103821  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:02.103882  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.108242  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:02.108293  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:02.151089  300705 cri.go:89] found id: ""
	I0729 13:43:02.151122  300705 logs.go:276] 0 containers: []
	W0729 13:43:02.151133  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:02.151141  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:02.151204  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:02.205700  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:02.205727  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.205732  300705 cri.go:89] found id: ""
	I0729 13:43:02.205741  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:02.205790  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.210332  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.214889  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:02.214913  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:02.229589  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:02.229621  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:02.278361  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:02.278394  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:02.319117  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:02.319146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.357874  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:02.357908  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.402114  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:02.402146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.442480  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:02.442514  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:01.903256  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.403400  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.902925  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.402616  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.903161  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.403255  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.902489  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.402506  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.902530  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:06.402436  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.953914  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:02.953961  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:03.013404  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:03.013441  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:03.151261  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:03.151294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:03.199910  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:03.199964  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:03.257103  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:03.257137  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:03.308519  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:03.308559  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:05.857929  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:05.878306  300705 api_server.go:72] duration metric: took 4m15.820258046s to wait for apiserver process to appear ...
	I0729 13:43:05.878338  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:05.878383  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:05.878451  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:05.924031  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:05.924071  300705 cri.go:89] found id: ""
	I0729 13:43:05.924083  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:05.924151  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.929284  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:05.929363  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:05.968980  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:05.969003  300705 cri.go:89] found id: ""
	I0729 13:43:05.969010  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:05.969056  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.973451  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:05.973516  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:06.011760  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.011784  300705 cri.go:89] found id: ""
	I0729 13:43:06.011794  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:06.011857  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.016065  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:06.016132  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:06.066319  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.066345  300705 cri.go:89] found id: ""
	I0729 13:43:06.066353  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:06.066420  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.071060  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:06.071120  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:06.117383  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.117405  300705 cri.go:89] found id: ""
	I0729 13:43:06.117413  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:06.117463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.121968  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:06.122053  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:06.156125  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.156151  300705 cri.go:89] found id: ""
	I0729 13:43:06.156160  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:06.156209  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.160301  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:06.160366  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:06.206751  300705 cri.go:89] found id: ""
	I0729 13:43:06.206780  300705 logs.go:276] 0 containers: []
	W0729 13:43:06.206790  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:06.206798  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:06.206860  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:06.248884  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.248918  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:06.248925  300705 cri.go:89] found id: ""
	I0729 13:43:06.248936  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:06.249006  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.253087  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.257229  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:06.257252  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.291495  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:06.291528  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.330190  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:06.330219  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.366500  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:06.366536  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.424871  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:06.424906  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:06.855025  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:06.855069  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:06.870025  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:06.870055  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:06.986590  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:06.986630  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:07.036972  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:07.037007  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:07.092602  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:07.092646  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:07.135326  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:07.135366  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:07.190208  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:07.190247  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:07.241865  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:07.241896  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.902842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.402861  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.903148  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.402619  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.902869  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.403349  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.903277  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.402468  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.535843  301044 kubeadm.go:1113] duration metric: took 13.813154738s to wait for elevateKubeSystemPrivileges
	I0729 13:43:10.535879  301044 kubeadm.go:394] duration metric: took 5m10.527995876s to StartCluster
	I0729 13:43:10.535899  301044 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.535991  301044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:43:10.538845  301044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.539141  301044 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:43:10.539343  301044 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:43:10.539513  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:43:10.539528  301044 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539556  301044 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539574  301044 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-972693"
	I0729 13:43:10.539587  301044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-972693"
	I0729 13:43:10.539600  301044 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539623  301044 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.539635  301044 addons.go:243] addon metrics-server should already be in state true
	I0729 13:43:10.539692  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	W0729 13:43:10.539594  301044 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:43:10.539817  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.540342  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540368  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540380  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540399  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540664  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540814  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.542249  301044 out.go:177] * Verifying Kubernetes components...
	I0729 13:43:10.543974  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:43:10.561555  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0729 13:43:10.561585  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0729 13:43:10.561820  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 13:43:10.562096  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562160  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562579  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562694  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562711  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.562750  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562766  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563224  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563236  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563496  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.563516  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563793  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563923  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.563959  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563982  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.564526  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.564781  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.569041  301044 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.569062  301044 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:43:10.569091  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.569443  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.569462  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.580340  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0729 13:43:10.580852  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.581371  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.581384  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.581724  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.581911  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.583937  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0729 13:43:10.584108  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.584422  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.584864  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.584881  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.585262  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.585445  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.586285  301044 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:43:10.586973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.587855  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:43:10.587873  301044 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:43:10.587907  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.588885  301044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:43:10.689091  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:43:10.689558  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:10.689837  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:10.590240  301044 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.590258  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:43:10.590275  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.592026  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0729 13:43:10.592306  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.592778  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.592859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.592877  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.593162  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.593295  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.593382  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.593455  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.593663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594055  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.594082  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594233  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.594388  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.594485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.594621  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.594882  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.594892  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.595227  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.595663  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.595680  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.611094  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0729 13:43:10.611617  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.612200  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.612224  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.612600  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.612973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.614541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.614743  301044 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:10.614757  301044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:43:10.614774  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.617611  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.618064  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.618416  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.618595  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.618754  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.791924  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:43:10.850744  301044 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866102  301044 node_ready.go:49] node "default-k8s-diff-port-972693" has status "Ready":"True"
	I0729 13:43:10.866137  301044 node_ready.go:38] duration metric: took 15.35404ms for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866171  301044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:10.877661  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:10.958120  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.981335  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:43:10.981363  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:43:10.982804  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:11.145078  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:43:11.145108  301044 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:43:11.236628  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:11.236658  301044 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:43:11.308646  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315025489s)
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290345752s)
	I0729 13:43:12.273254  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273270  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273283  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273296  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273572  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273589  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273598  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273606  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273704  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273721  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273731  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273739  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.275558  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275601  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275616  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.275624  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275634  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275644  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.309442  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.309473  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.309839  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.309888  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.309909  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.464546  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.155855113s)
	I0729 13:43:12.464601  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.464614  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465037  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465060  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465071  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.465081  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465398  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.465418  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465476  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465494  301044 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-972693"
	I0729 13:43:12.467315  301044 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:43:09.811571  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:43:09.817221  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:43:09.818319  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:09.818342  300705 api_server.go:131] duration metric: took 3.939996032s to wait for apiserver health ...
	I0729 13:43:09.818350  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:09.818373  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:09.818425  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:09.861856  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:09.861883  300705 cri.go:89] found id: ""
	I0729 13:43:09.861894  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:09.861962  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.867142  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:09.867216  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:09.909767  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:09.909795  300705 cri.go:89] found id: ""
	I0729 13:43:09.909808  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:09.909877  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.914410  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:09.914482  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:09.953540  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:09.953568  300705 cri.go:89] found id: ""
	I0729 13:43:09.953578  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:09.953637  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.958140  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:09.958214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:09.999809  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:09.999836  300705 cri.go:89] found id: ""
	I0729 13:43:09.999846  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:09.999911  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.004505  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:10.004587  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:10.049146  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.049173  300705 cri.go:89] found id: ""
	I0729 13:43:10.049182  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:10.049252  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.053631  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:10.053698  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:10.090361  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.090386  300705 cri.go:89] found id: ""
	I0729 13:43:10.090396  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:10.090442  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.095528  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:10.095588  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:10.131892  300705 cri.go:89] found id: ""
	I0729 13:43:10.131925  300705 logs.go:276] 0 containers: []
	W0729 13:43:10.131937  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:10.131944  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:10.132008  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:10.169101  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.169127  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.169133  300705 cri.go:89] found id: ""
	I0729 13:43:10.169142  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:10.169203  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.174716  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.179196  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:10.179217  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:10.222803  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:10.222833  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:10.265944  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:10.265975  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.310266  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:10.310294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.370562  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:10.370611  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.415759  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:10.415803  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:10.467672  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:10.467702  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:10.531249  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:10.531293  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:10.550454  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:10.550485  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:10.709028  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:10.709068  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:10.761048  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:10.761093  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:10.813125  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:10.813169  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.852581  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:10.852608  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:13.725236  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:43:13.725272  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.725279  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.725284  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.725289  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.725293  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.725298  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.725306  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.725312  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.725322  300705 system_pods.go:74] duration metric: took 3.906966083s to wait for pod list to return data ...
	I0729 13:43:13.725335  300705 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:13.727954  300705 default_sa.go:45] found service account: "default"
	I0729 13:43:13.727984  300705 default_sa.go:55] duration metric: took 2.638639ms for default service account to be created ...
	I0729 13:43:13.728032  300705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:13.733141  300705 system_pods.go:86] 8 kube-system pods found
	I0729 13:43:13.733163  300705 system_pods.go:89] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.733169  300705 system_pods.go:89] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.733173  300705 system_pods.go:89] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.733177  300705 system_pods.go:89] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.733181  300705 system_pods.go:89] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.733185  300705 system_pods.go:89] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.733191  300705 system_pods.go:89] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.733196  300705 system_pods.go:89] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.733205  300705 system_pods.go:126] duration metric: took 5.16021ms to wait for k8s-apps to be running ...
	I0729 13:43:13.733213  300705 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:13.733255  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:13.755011  300705 system_svc.go:56] duration metric: took 21.784065ms WaitForService to wait for kubelet
	I0729 13:43:13.755042  300705 kubeadm.go:582] duration metric: took 4m23.697000108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:13.755068  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:13.758549  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:13.758572  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:13.758586  300705 node_conditions.go:105] duration metric: took 3.512205ms to run NodePressure ...
	I0729 13:43:13.758601  300705 start.go:241] waiting for startup goroutines ...
	I0729 13:43:13.758612  300705 start.go:246] waiting for cluster config update ...
	I0729 13:43:13.758625  300705 start.go:255] writing updated cluster config ...
	I0729 13:43:13.758945  300705 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:13.810333  300705 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:13.812397  300705 out.go:177] * Done! kubectl is now configured to use "embed-certs-135920" cluster and "default" namespace by default
	I0729 13:43:12.468541  301044 addons.go:510] duration metric: took 1.929219306s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:43:12.887280  301044 pod_ready.go:102] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:13.386255  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.386279  301044 pod_ready.go:81] duration metric: took 2.508586907s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.386291  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391278  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.391302  301044 pod_ready.go:81] duration metric: took 5.00403ms for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391313  301044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396324  301044 pod_ready.go:92] pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.396343  301044 pod_ready.go:81] duration metric: took 5.022707ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396350  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403008  301044 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.403026  301044 pod_ready.go:81] duration metric: took 6.670677ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403035  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407836  301044 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.407856  301044 pod_ready.go:81] duration metric: took 4.814401ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407868  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783140  301044 pod_ready.go:92] pod "kube-proxy-tfsk9" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.783168  301044 pod_ready.go:81] duration metric: took 375.291599ms for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783181  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182560  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:14.182588  301044 pod_ready.go:81] duration metric: took 399.399691ms for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182597  301044 pod_ready.go:38] duration metric: took 3.316409576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:14.182610  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:14.182661  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:14.210715  301044 api_server.go:72] duration metric: took 3.671529553s to wait for apiserver process to appear ...
	I0729 13:43:14.210749  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:14.210790  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:43:14.214886  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:43:14.215773  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:14.215795  301044 api_server.go:131] duration metric: took 5.0389ms to wait for apiserver health ...
	I0729 13:43:14.215802  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:14.386356  301044 system_pods.go:59] 9 kube-system pods found
	I0729 13:43:14.386389  301044 system_pods.go:61] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.386394  301044 system_pods.go:61] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.386398  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.386401  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.386405  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.386409  301044 system_pods.go:61] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.386412  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.386417  301044 system_pods.go:61] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.386420  301044 system_pods.go:61] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.386430  301044 system_pods.go:74] duration metric: took 170.622271ms to wait for pod list to return data ...
	I0729 13:43:14.386437  301044 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:14.582618  301044 default_sa.go:45] found service account: "default"
	I0729 13:43:14.582643  301044 default_sa.go:55] duration metric: took 196.19918ms for default service account to be created ...
	I0729 13:43:14.582652  301044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:14.785669  301044 system_pods.go:86] 9 kube-system pods found
	I0729 13:43:14.785701  301044 system_pods.go:89] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.785707  301044 system_pods.go:89] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.785711  301044 system_pods.go:89] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.785719  301044 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.785723  301044 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.785727  301044 system_pods.go:89] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.785731  301044 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.785737  301044 system_pods.go:89] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.785741  301044 system_pods.go:89] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.785750  301044 system_pods.go:126] duration metric: took 203.092668ms to wait for k8s-apps to be running ...
	I0729 13:43:14.785756  301044 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:14.785801  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:14.802927  301044 system_svc.go:56] duration metric: took 17.160927ms WaitForService to wait for kubelet
	I0729 13:43:14.802957  301044 kubeadm.go:582] duration metric: took 4.263780375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:14.802977  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:14.983106  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:14.983135  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:14.983146  301044 node_conditions.go:105] duration metric: took 180.164781ms to run NodePressure ...
	I0729 13:43:14.983159  301044 start.go:241] waiting for startup goroutines ...
	I0729 13:43:14.983165  301044 start.go:246] waiting for cluster config update ...
	I0729 13:43:14.983175  301044 start.go:255] writing updated cluster config ...
	I0729 13:43:14.983443  301044 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:15.038438  301044 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:15.040318  301044 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-972693" cluster and "default" namespace by default
	I0729 13:43:15.690809  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:15.691011  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:25.691962  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:25.692244  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:45.693269  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:45.693473  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696107  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:44:25.696300  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696307  301425 kubeadm.go:310] 
	I0729 13:44:25.696341  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:44:25.696400  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:44:25.696419  301425 kubeadm.go:310] 
	I0729 13:44:25.696463  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:44:25.696510  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:44:25.696653  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:44:25.696674  301425 kubeadm.go:310] 
	I0729 13:44:25.696818  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:44:25.696868  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:44:25.696921  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:44:25.696930  301425 kubeadm.go:310] 
	I0729 13:44:25.697076  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:44:25.697192  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:44:25.697206  301425 kubeadm.go:310] 
	I0729 13:44:25.697349  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:44:25.697459  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:44:25.697568  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:44:25.697669  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:44:25.697680  301425 kubeadm.go:310] 
	I0729 13:44:25.698359  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:44:25.698490  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:44:25.698596  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:44:25.698771  301425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:44:25.698848  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:44:26.160539  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:26.175482  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:44:26.185562  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:44:26.185593  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:44:26.185657  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:44:26.195781  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:44:26.195865  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:44:26.207404  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:44:26.217068  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:44:26.217188  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:44:26.226075  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.234622  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:44:26.234684  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.243756  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:44:26.252630  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:44:26.252695  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:44:26.262846  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:44:26.340215  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:44:26.340318  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:44:26.496049  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:44:26.496199  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:44:26.496327  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:44:26.678135  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:44:26.680089  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:44:26.680173  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:44:26.680257  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:44:26.680378  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:44:26.680470  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:44:26.680570  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:44:26.680653  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:44:26.680751  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:44:26.681022  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:44:26.681519  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:44:26.681876  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:44:26.681994  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:44:26.682083  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:44:26.762680  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:44:26.922517  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:44:26.973731  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:44:27.193064  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:44:27.216477  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:44:27.219036  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:44:27.219293  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:44:27.386424  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:44:27.388194  301425 out.go:204]   - Booting up control plane ...
	I0729 13:44:27.388340  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:44:27.390345  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:44:27.391455  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:44:27.392303  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:44:27.394301  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:45:07.396989  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:45:07.397449  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:07.397719  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:12.397982  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:12.398297  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:22.398751  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:22.399010  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:42.399462  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:42.399675  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398413  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:46:22.398684  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398700  301425 kubeadm.go:310] 
	I0729 13:46:22.398763  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:46:22.398844  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:46:22.398886  301425 kubeadm.go:310] 
	I0729 13:46:22.398948  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:46:22.399002  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:46:22.399132  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:46:22.399145  301425 kubeadm.go:310] 
	I0729 13:46:22.399287  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:46:22.399346  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:46:22.399392  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:46:22.399404  301425 kubeadm.go:310] 
	I0729 13:46:22.399530  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:46:22.399610  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:46:22.399617  301425 kubeadm.go:310] 
	I0729 13:46:22.399735  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:46:22.399844  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:46:22.399943  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:46:22.400021  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:46:22.400035  301425 kubeadm.go:310] 
	I0729 13:46:22.400291  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:46:22.400370  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:46:22.400440  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:46:22.400520  301425 kubeadm.go:394] duration metric: took 7m57.286753846s to StartCluster
	I0729 13:46:22.400612  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:46:22.400692  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:46:22.446188  301425 cri.go:89] found id: ""
	I0729 13:46:22.446216  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.446225  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:46:22.446232  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:46:22.446289  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:46:22.484089  301425 cri.go:89] found id: ""
	I0729 13:46:22.484118  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.484128  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:46:22.484135  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:46:22.484197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:46:22.526817  301425 cri.go:89] found id: ""
	I0729 13:46:22.526846  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.526854  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:46:22.526860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:46:22.526912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:46:22.564787  301425 cri.go:89] found id: ""
	I0729 13:46:22.564834  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.564846  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:46:22.564854  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:46:22.564920  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:46:22.601843  301425 cri.go:89] found id: ""
	I0729 13:46:22.601881  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.601892  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:46:22.601900  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:46:22.601980  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:46:22.637420  301425 cri.go:89] found id: ""
	I0729 13:46:22.637448  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.637455  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:46:22.637462  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:46:22.637519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:46:22.672427  301425 cri.go:89] found id: ""
	I0729 13:46:22.672465  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.672476  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:46:22.672485  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:46:22.672549  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:46:22.708256  301425 cri.go:89] found id: ""
	I0729 13:46:22.708285  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.708294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:46:22.708306  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:46:22.708323  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:46:22.819287  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:46:22.819327  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:46:22.859298  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:46:22.859339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:46:22.914290  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:46:22.914342  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:46:22.936919  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:46:22.936951  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:46:23.035889  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 13:46:23.035939  301425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:46:23.035991  301425 out.go:239] * 
	W0729 13:46:23.036103  301425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.036137  301425 out.go:239] * 
	W0729 13:46:23.037370  301425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:46:23.040573  301425 out.go:177] 
	W0729 13:46:23.042130  301425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.042173  301425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:46:23.042193  301425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:46:23.043539  301425 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.888293598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260784888273797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9961c7f3-7f37-4779-80e8-7c3c49529b6c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.888730605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=381d3910-15da-45dd-91c8-4cd21696207a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.888832297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=381d3910-15da-45dd-91c8-4cd21696207a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.888864003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=381d3910-15da-45dd-91c8-4cd21696207a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.921370931Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6f199f8-651e-4823-9cac-8b26597215dc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.921456922Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6f199f8-651e-4823-9cac-8b26597215dc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.922481038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9aa8dc49-bc1c-4f06-bc49-f56d88d23d41 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.922910513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260784922890077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9aa8dc49-bc1c-4f06-bc49-f56d88d23d41 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.923460780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=915b7e17-ff08-47f1-97d7-11858cd3fcb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.923527509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=915b7e17-ff08-47f1-97d7-11858cd3fcb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.923563579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=915b7e17-ff08-47f1-97d7-11858cd3fcb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.957384353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99b9f2cc-0d9a-4304-bb0b-77116f0342f7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.957468980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99b9f2cc-0d9a-4304-bb0b-77116f0342f7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.959004423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04253835-dc6e-417f-9e98-ddc532dc5cf1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.959411638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260784959378823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04253835-dc6e-417f-9e98-ddc532dc5cf1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.960046081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bba1d763-a241-44af-9460-b47037230ed0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.960108843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bba1d763-a241-44af-9460-b47037230ed0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.960141012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bba1d763-a241-44af-9460-b47037230ed0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.994390040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dacb172f-f89c-4d0b-8629-0fe967daa138 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.994487763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dacb172f-f89c-4d0b-8629-0fe967daa138 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.995882790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd4cd1bd-c0dc-4193-8ac3-3e4d95d74430 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.996249527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260784996227564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd4cd1bd-c0dc-4193-8ac3-3e4d95d74430 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.996836647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23661a93-7f73-4a07-923e-834a985becc6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.996922877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23661a93-7f73-4a07-923e-834a985becc6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:46:24 old-k8s-version-924039 crio[651]: time="2024-07-29 13:46:24.996980820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=23661a93-7f73-4a07-923e-834a985becc6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 13:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050569] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048582] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul29 13:38] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.901895] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.671429] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000011] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.092860] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.061256] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065965] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.189582] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.150988] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.251542] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.656340] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.075950] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.028528] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +9.845046] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 13:42] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Jul29 13:44] systemd-fstab-generator[5301]: Ignoring "noauto" option for root device
	[  +0.070564] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:46:25 up 8 min,  0 users,  load average: 0.01, 0.06, 0.03
	Linux old-k8s-version-924039 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]: net.(*sysDialer).dialSerial(0xc000019800, 0x4f7fe40, 0xc0009bb6e0, 0xc000993450, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /usr/local/go/src/net/dial.go:548 +0x152
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]: net.(*Dialer).DialContext(0xc0001f36e0, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc00097f650, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0007665c0, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc00097f650, 0x24, 0x60, 0x7fe742e48488, 0x118, ...)
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]: net/http.(*Transport).dial(0xc000683cc0, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc00097f650, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]: net/http.(*Transport).dialConn(0xc000683cc0, 0x4f7fe00, 0xc000122018, 0x0, 0xc0009dc3c0, 0x5, 0xc00097f650, 0x24, 0x0, 0xc00093afc0, ...)
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]: net/http.(*Transport).dialConnFor(0xc000683cc0, 0xc000937290)
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]: created by net/http.(*Transport).queueForDial
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 29 13:46:22 old-k8s-version-924039 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 13:46:22 old-k8s-version-924039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 13:46:22 old-k8s-version-924039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 29 13:46:22 old-k8s-version-924039 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 13:46:22 old-k8s-version-924039 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5541]: I0729 13:46:22.965075    5541 server.go:416] Version: v1.20.0
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5541]: I0729 13:46:22.965317    5541 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5541]: I0729 13:46:22.967365    5541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5541]: W0729 13:46:22.968306    5541 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 13:46:22 old-k8s-version-924039 kubelet[5541]: I0729 13:46:22.968665    5541 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (231.925931ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-924039" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (735.70s)
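Note: the kubeadm output quoted above already names the next diagnostic steps; the sketch below only collects them into runnable commands. It assumes the profile name old-k8s-version-924039 taken from this log and that `minikube ssh` can still reach the VM; it is an illustration, not part of the recorded test run.

	# Inspect the kubelet that kubeadm reported as not running or unhealthy
	minikube ssh -p old-k8s-version-924039 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-924039 "sudo journalctl -xeu kubelet"

	# List any control-plane containers CRI-O managed to start (taken from the kubeadm hint above)
	minikube ssh -p old-k8s-version-924039 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry with the cgroup-driver override suggested by the error message
	minikube start -p old-k8s-version-924039 --extra-config=kubelet.cgroup-driver=systemd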

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 13:43:05.411941  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-566777 -n no-preload-566777
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 13:51:29.952404379 +0000 UTC m=+6568.215759192
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
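Note: the wait above targets pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A minimal sketch for inspecting those pods by hand, assuming the kubeconfig context is named after the profile (minikube's default); this is an illustration, not output from the test run.

	kubectl --context no-preload-566777 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-566777 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard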
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-566777 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-566777 logs -n 25: (2.125118147s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo cat                              | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:34:10
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:10.969348  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969356  301425 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:10.969360  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969506  301425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:34:10.970007  301425 out.go:298] Setting JSON to false
	I0729 13:34:10.970908  301425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11794,"bootTime":1722248257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:34:10.970971  301425 start.go:139] virtualization: kvm guest
	I0729 13:34:10.973245  301425 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:34:10.974804  301425 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:34:10.974803  301425 notify.go:220] Checking for updates...
	I0729 13:34:10.977011  301425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:34:10.978270  301425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:34:10.979473  301425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:34:10.980743  301425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:34:10.981923  301425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:34:10.983514  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:34:10.983962  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:10.984049  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:10.998985  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 13:34:10.999407  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:10.999928  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:10.999951  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.000306  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.000497  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.002455  301425 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:34:11.003702  301425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:34:11.003997  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:11.004037  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:11.018707  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0729 13:34:11.019177  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:11.019653  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:11.019676  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.019968  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.020126  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.055819  301425 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:34:11.057085  301425 start.go:297] selected driver: kvm2
	I0729 13:34:11.057104  301425 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.057242  301425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:34:11.057967  301425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.058029  301425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:34:11.073706  301425 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:34:11.074089  301425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:34:11.074169  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:34:11.074188  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:34:11.074240  301425 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.074366  301425 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.076296  301425 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:34:09.149068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:11.077828  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:34:11.077869  301425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:34:11.077879  301425 cache.go:56] Caching tarball of preloaded images
	I0729 13:34:11.077959  301425 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:34:11.077970  301425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:34:11.078069  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:34:11.078241  301425 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:34:15.229067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:18.301058  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:24.381104  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:27.453064  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:33.533067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:36.605120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:42.685075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:45.757111  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:51.837033  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:54.909068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:00.989073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:04.061125  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:10.141082  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:13.213123  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:19.293109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:22.365061  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:28.445075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:31.517094  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:37.597080  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:40.669073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:46.749070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:49.821083  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:55.901013  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:58.973149  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:05.053098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:08.125109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:14.205093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:17.277093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:23.357105  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:26.429122  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:32.509070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:35.581107  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:41.661120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:44.733129  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:50.813085  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:53.885117  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:59.965073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:03.037079  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:09.117098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:12.189049  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:15.193505  300746 start.go:364] duration metric: took 4m36.683808785s to acquireMachinesLock for "no-preload-566777"
	I0729 13:37:15.193569  300746 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:15.193577  300746 fix.go:54] fixHost starting: 
	I0729 13:37:15.193937  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:15.193976  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:15.209623  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0729 13:37:15.210158  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:15.210625  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:37:15.210646  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:15.211001  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:15.211265  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:15.211468  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:37:15.213144  300746 fix.go:112] recreateIfNeeded on no-preload-566777: state=Stopped err=<nil>
	I0729 13:37:15.213185  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	W0729 13:37:15.213349  300746 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:15.215474  300746 out.go:177] * Restarting existing kvm2 VM for "no-preload-566777" ...
	I0729 13:37:15.190804  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:15.190850  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191224  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:37:15.191257  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191494  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:37:15.193354  300705 machine.go:97] duration metric: took 4m37.425774293s to provisionDockerMachine
	I0729 13:37:15.193407  300705 fix.go:56] duration metric: took 4m37.447841932s for fixHost
	I0729 13:37:15.193419  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 4m37.447869212s
	W0729 13:37:15.193447  300705 start.go:714] error starting host: provision: host is not running
	W0729 13:37:15.193569  300705 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 13:37:15.193581  300705 start.go:729] Will try again in 5 seconds ...
	I0729 13:37:15.216957  300746 main.go:141] libmachine: (no-preload-566777) Calling .Start
	I0729 13:37:15.217120  300746 main.go:141] libmachine: (no-preload-566777) Ensuring networks are active...
	I0729 13:37:15.217761  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network default is active
	I0729 13:37:15.218067  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network mk-no-preload-566777 is active
	I0729 13:37:15.218451  300746 main.go:141] libmachine: (no-preload-566777) Getting domain xml...
	I0729 13:37:15.219134  300746 main.go:141] libmachine: (no-preload-566777) Creating domain...
	I0729 13:37:16.412301  300746 main.go:141] libmachine: (no-preload-566777) Waiting to get IP...
	I0729 13:37:16.413162  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.413576  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.413670  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.413557  302040 retry.go:31] will retry after 233.512145ms: waiting for machine to come up
	I0729 13:37:16.649335  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.649921  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.649945  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.649885  302040 retry.go:31] will retry after 328.846738ms: waiting for machine to come up
	I0729 13:37:16.980566  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.980976  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.981022  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.980926  302040 retry.go:31] will retry after 329.69915ms: waiting for machine to come up
	I0729 13:37:17.312547  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.312948  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.312977  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.312906  302040 retry.go:31] will retry after 418.810733ms: waiting for machine to come up
	I0729 13:37:17.733615  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.734042  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.734065  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.734009  302040 retry.go:31] will retry after 694.191211ms: waiting for machine to come up
	I0729 13:37:20.196079  300705 start.go:360] acquireMachinesLock for embed-certs-135920: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:18.429670  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:18.430024  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:18.430055  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:18.429973  302040 retry.go:31] will retry after 857.66396ms: waiting for machine to come up
	I0729 13:37:19.289078  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:19.289491  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:19.289521  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:19.289458  302040 retry.go:31] will retry after 994.340261ms: waiting for machine to come up
	I0729 13:37:20.285875  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:20.286308  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:20.286340  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:20.286263  302040 retry.go:31] will retry after 1.052380852s: waiting for machine to come up
	I0729 13:37:21.340435  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:21.340775  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:21.340821  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:21.340743  302040 retry.go:31] will retry after 1.429700498s: waiting for machine to come up
	I0729 13:37:22.772362  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:22.772754  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:22.772782  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:22.772700  302040 retry.go:31] will retry after 1.702185495s: waiting for machine to come up
	I0729 13:37:24.477636  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:24.478074  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:24.478106  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:24.478003  302040 retry.go:31] will retry after 2.649912402s: waiting for machine to come up
	I0729 13:37:27.129797  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:27.130212  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:27.130243  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:27.130159  302040 retry.go:31] will retry after 3.079887428s: waiting for machine to come up
	I0729 13:37:30.213431  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:30.213918  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:30.213958  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:30.213875  302040 retry.go:31] will retry after 3.08003223s: waiting for machine to come up
	I0729 13:37:33.297139  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.297604  300746 main.go:141] libmachine: (no-preload-566777) Found IP for machine: 192.168.61.84
	I0729 13:37:33.297627  300746 main.go:141] libmachine: (no-preload-566777) Reserving static IP address...
	I0729 13:37:33.297639  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has current primary IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.298106  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.298146  300746 main.go:141] libmachine: (no-preload-566777) Reserved static IP address: 192.168.61.84
	I0729 13:37:33.298164  300746 main.go:141] libmachine: (no-preload-566777) DBG | skip adding static IP to network mk-no-preload-566777 - found existing host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"}
	I0729 13:37:33.298178  300746 main.go:141] libmachine: (no-preload-566777) DBG | Getting to WaitForSSH function...
	I0729 13:37:33.298194  300746 main.go:141] libmachine: (no-preload-566777) Waiting for SSH to be available...
	I0729 13:37:33.300310  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300618  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.300653  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300731  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH client type: external
	I0729 13:37:33.300773  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa (-rw-------)
	I0729 13:37:33.300826  300746 main.go:141] libmachine: (no-preload-566777) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:33.300957  300746 main.go:141] libmachine: (no-preload-566777) DBG | About to run SSH command:
	I0729 13:37:33.300985  300746 main.go:141] libmachine: (no-preload-566777) DBG | exit 0
	I0729 13:37:34.861481  301044 start.go:364] duration metric: took 4m23.064160625s to acquireMachinesLock for "default-k8s-diff-port-972693"
	I0729 13:37:34.861564  301044 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:34.861576  301044 fix.go:54] fixHost starting: 
	I0729 13:37:34.862021  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:34.862055  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:34.879106  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0729 13:37:34.879506  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:34.880050  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:37:34.880077  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:34.880423  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:34.880637  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:34.880838  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:37:34.882251  301044 fix.go:112] recreateIfNeeded on default-k8s-diff-port-972693: state=Stopped err=<nil>
	I0729 13:37:34.882284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	W0729 13:37:34.882465  301044 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:34.884611  301044 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-972693" ...
	I0729 13:37:33.420745  300746 main.go:141] libmachine: (no-preload-566777) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:33.421178  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetConfigRaw
	I0729 13:37:33.421861  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.424343  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.424680  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.424710  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.425061  300746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/config.json ...
	I0729 13:37:33.425244  300746 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:33.425262  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:33.425513  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.427708  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.427961  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.427989  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.428171  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.428354  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428528  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428672  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.428933  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.429139  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.429150  300746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:33.525027  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:33.525065  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525306  300746 buildroot.go:166] provisioning hostname "no-preload-566777"
	I0729 13:37:33.525340  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525551  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.528124  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528491  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.528529  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528677  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.528865  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529025  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529144  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.529286  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.529453  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.529465  300746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-566777 && echo "no-preload-566777" | sudo tee /etc/hostname
	I0729 13:37:33.638867  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-566777
	
	I0729 13:37:33.638902  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.641406  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641730  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.641762  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641908  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.642112  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642285  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642414  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.642555  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.642727  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.642743  300746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-566777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-566777/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-566777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:33.749760  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:33.749789  300746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:33.749812  300746 buildroot.go:174] setting up certificates
	I0729 13:37:33.749821  300746 provision.go:84] configureAuth start
	I0729 13:37:33.749831  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.750114  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.752924  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753241  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.753264  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753477  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.755385  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755681  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.755701  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755840  300746 provision.go:143] copyHostCerts
	I0729 13:37:33.755904  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:33.755926  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:33.756019  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:33.756156  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:33.756169  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:33.756206  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:33.756276  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:33.756286  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:33.756317  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:33.756380  300746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.no-preload-566777 san=[127.0.0.1 192.168.61.84 localhost minikube no-preload-566777]
	I0729 13:37:34.226953  300746 provision.go:177] copyRemoteCerts
	I0729 13:37:34.227033  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:34.227066  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.229542  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229816  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.229853  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.230177  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.230314  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.230452  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.310803  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:37:34.334545  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:37:34.357908  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:34.381163  300746 provision.go:87] duration metric: took 631.325967ms to configureAuth
	I0729 13:37:34.381200  300746 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:34.381441  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:37:34.381535  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.383985  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384286  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.384312  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384473  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.384681  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384862  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384995  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.385176  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.385393  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.385414  300746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:34.640587  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:34.640615  300746 machine.go:97] duration metric: took 1.215357318s to provisionDockerMachine
	I0729 13:37:34.640628  300746 start.go:293] postStartSetup for "no-preload-566777" (driver="kvm2")
	I0729 13:37:34.640645  300746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:34.640683  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.641067  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:34.641104  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.643711  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644066  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.644097  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644215  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.644398  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.644555  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.644677  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.723215  300746 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:34.727393  300746 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:34.727425  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:34.727507  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:34.727614  300746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:34.727770  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:34.736666  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:34.759678  300746 start.go:296] duration metric: took 119.034973ms for postStartSetup
	I0729 13:37:34.759716  300746 fix.go:56] duration metric: took 19.566140877s for fixHost
	I0729 13:37:34.759748  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.762103  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762468  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.762491  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762645  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.762843  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763008  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763111  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.763229  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.763392  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.763403  300746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:34.861306  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260254.835831305
	
	I0729 13:37:34.861333  300746 fix.go:216] guest clock: 1722260254.835831305
	I0729 13:37:34.861341  300746 fix.go:229] Guest: 2024-07-29 13:37:34.835831305 +0000 UTC Remote: 2024-07-29 13:37:34.759720831 +0000 UTC m=+296.387252495 (delta=76.110474ms)
	I0729 13:37:34.861376  300746 fix.go:200] guest clock delta is within tolerance: 76.110474ms
	I0729 13:37:34.861384  300746 start.go:83] releasing machines lock for "no-preload-566777", held for 19.66783585s
	I0729 13:37:34.861413  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.861708  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:34.864181  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864534  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.864567  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864757  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865296  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865467  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865546  300746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:34.865600  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.865726  300746 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:34.865753  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.868333  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868522  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868772  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868810  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868839  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868859  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868913  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869060  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869152  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869209  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869300  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869349  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869417  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.869551  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.970978  300746 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:34.978226  300746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:35.128653  300746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:35.134619  300746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:35.134688  300746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:35.150674  300746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:35.150697  300746 start.go:495] detecting cgroup driver to use...
	I0729 13:37:35.150762  300746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:35.166545  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:35.178859  300746 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:35.178913  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:35.197133  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:35.214430  300746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:35.337707  300746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:35.467057  300746 docker.go:233] disabling docker service ...
	I0729 13:37:35.467134  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:35.480960  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:35.493850  300746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:35.629455  300746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:35.741534  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:35.754886  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:35.773243  300746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:37:35.773323  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.783589  300746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:35.783673  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.794150  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.805389  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.816636  300746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:35.828027  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.838467  300746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.856470  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.866773  300746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:35.876110  300746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:35.876175  300746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:35.889768  300746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:35.909971  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:36.046023  300746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:36.192169  300746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:36.192238  300746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:36.197281  300746 start.go:563] Will wait 60s for crictl version
	I0729 13:37:36.197365  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.201359  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:36.248317  300746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:36.248420  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.276247  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.306549  300746 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 13:37:34.885944  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Start
	I0729 13:37:34.886114  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring networks are active...
	I0729 13:37:34.886856  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network default is active
	I0729 13:37:34.887211  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network mk-default-k8s-diff-port-972693 is active
	I0729 13:37:34.887684  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Getting domain xml...
	I0729 13:37:34.888427  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Creating domain...
	I0729 13:37:36.147265  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting to get IP...
	I0729 13:37:36.148095  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148547  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148616  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.148516  302181 retry.go:31] will retry after 191.117257ms: waiting for machine to come up
	I0729 13:37:36.340984  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341507  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.341444  302181 retry.go:31] will retry after 285.557329ms: waiting for machine to come up
	I0729 13:37:36.629066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629670  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.629621  302181 retry.go:31] will retry after 397.294163ms: waiting for machine to come up
	I0729 13:37:36.307930  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:36.311057  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311389  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:36.311417  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311699  300746 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:36.316257  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:36.330109  300746 kubeadm.go:883] updating cluster {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:36.330268  300746 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:37:36.330320  300746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:36.367218  300746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 13:37:36.367250  300746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:37:36.367327  300746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.367333  300746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.367394  300746 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 13:37:36.367404  300746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.367432  300746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.367353  300746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.367412  300746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.367743  300746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.369020  300746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.369125  300746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.369150  300746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.369203  300746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.369015  300746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.369484  300746 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 13:37:36.369609  300746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.369763  300746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.560256  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.600945  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.604476  300746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 13:37:36.604539  300746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.604592  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.606566  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 13:37:36.649109  300746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 13:37:36.649210  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.649212  300746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.649328  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.696863  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.698623  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.713816  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.727059  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.764110  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.764204  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.764208  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.784479  300746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 13:37:36.784542  300746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.784558  300746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 13:37:36.784597  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.784598  300746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.784694  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.813445  300746 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 13:37:36.813491  300746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.813544  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.825275  300746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 13:37:36.825290  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 13:37:36.825392  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825463  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825327  300746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.825515  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.852786  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.852866  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:36.852822  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.852843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.852984  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:37.587824  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:37.028009  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028349  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028378  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.028295  302181 retry.go:31] will retry after 507.597159ms: waiting for machine to come up
	I0729 13:37:37.538138  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538550  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538581  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.538507  302181 retry.go:31] will retry after 508.855087ms: waiting for machine to come up
	I0729 13:37:38.049628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050241  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050277  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.050198  302181 retry.go:31] will retry after 889.089993ms: waiting for machine to come up
	I0729 13:37:38.940541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941096  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.941009  302181 retry.go:31] will retry after 891.889885ms: waiting for machine to come up
	I0729 13:37:39.834956  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835395  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:39.835341  302181 retry.go:31] will retry after 1.030799215s: waiting for machine to come up
	I0729 13:37:40.867814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868336  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868367  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:40.868283  302181 retry.go:31] will retry after 1.40369357s: waiting for machine to come up
	I0729 13:37:38.870850  300746 ssh_runner.go:235] Completed: which crictl: (2.045307778s)
	I0729 13:37:38.870925  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:38.870921  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.045429354s)
	I0729 13:37:38.870946  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 13:37:38.871001  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0: (2.018116939s)
	I0729 13:37:38.871024  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.01808875s)
	I0729 13:37:38.871054  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871083  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.018080011s)
	I0729 13:37:38.871109  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 13:37:38.871120  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871056  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 13:37:38.871166  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871151  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871234  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0: (2.018278547s)
	I0729 13:37:38.871247  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:38.871259  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 13:37:38.871304  300746 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.283446632s)
	I0729 13:37:38.871343  300746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 13:37:38.871372  300746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:38.871406  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:38.871310  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:38.939395  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:38.939419  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 13:37:38.939532  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:40.939632  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.068434649s)
	I0729 13:37:40.939669  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 13:37:40.939693  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939702  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.068259157s)
	I0729 13:37:40.939734  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 13:37:40.939761  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939794  300746 ssh_runner.go:235] Completed: which crictl: (2.068372626s)
	I0729 13:37:40.939827  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.068564103s)
	I0729 13:37:40.939843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:40.939844  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.000295325s)
	I0729 13:37:40.939847  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 13:37:40.939856  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 13:37:40.999406  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 13:37:40.999505  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:43.015187  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.075399061s)
	I0729 13:37:43.015226  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 13:37:43.015243  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.015694914s)
	I0729 13:37:43.015259  300746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:43.015279  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 13:37:43.015313  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:42.273822  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274326  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:42.274251  302181 retry.go:31] will retry after 2.255017939s: waiting for machine to come up
	I0729 13:37:44.531432  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531845  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531873  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:44.531801  302181 retry.go:31] will retry after 2.272405743s: waiting for machine to come up
	I0729 13:37:46.401061  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.385713069s)
	I0729 13:37:46.401109  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 13:37:46.401147  300746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:46.401207  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:48.358628  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.9573934s)
	I0729 13:37:48.358659  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 13:37:48.358682  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:48.358733  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:46.806043  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806654  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806681  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:46.806599  302181 retry.go:31] will retry after 2.212726673s: waiting for machine to come up
	I0729 13:37:49.022244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022732  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022770  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:49.022677  302181 retry.go:31] will retry after 3.071460325s: waiting for machine to come up
	I0729 13:37:50.216727  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.857925776s)
	I0729 13:37:50.216769  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 13:37:50.216822  300746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.216879  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.862685  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 13:37:50.862738  300746 cache_images.go:123] Successfully loaded all cached images
	I0729 13:37:50.862746  300746 cache_images.go:92] duration metric: took 14.49548231s to LoadCachedImages
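The per-image work interleaved above reduces to three commands per image; a minimal sketch for one of them, with the path and tag taken from the log (illustrative only, minikube drives these over SSH):
	IMG=/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0  # remove whatever the runtime currently has under that tag
	stat -c "%s %y" "$IMG"                                              # the copy is skipped when the cached tarball already exists
	sudo podman load -i "$IMG"                                          # load the cached tarball into the runtime's image store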
	I0729 13:37:50.862763  300746 kubeadm.go:934] updating node { 192.168.61.84 8443 v1.31.0-beta.0 crio true true} ...
	I0729 13:37:50.862924  300746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-566777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:50.863021  300746 ssh_runner.go:195] Run: crio config
	I0729 13:37:50.911526  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:50.911551  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:50.911563  300746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:50.911593  300746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.84 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-566777 NodeName:no-preload-566777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:50.911782  300746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-566777"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:50.911856  300746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 13:37:50.922091  300746 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:50.922162  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:50.931275  300746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 13:37:50.947494  300746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 13:37:50.963108  300746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
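Once kubeadm.yaml.new is on the node it can be sanity-checked before the init phases further down run; a hedged sketch, assuming the staged v1.31.0-beta.0 kubeadm supports the `config validate` subcommand (present in recent kubeadm releases):
	sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new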
	I0729 13:37:50.979666  300746 ssh_runner.go:195] Run: grep 192.168.61.84	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:50.983215  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
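Unrolled, the /etc/hosts one-liner above does the following (same effect, shown only for readability):
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$   # drop any stale control-plane entry
	printf '192.168.61.84\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$ # append the current mapping
	sudo cp /tmp/h.$$ /etc/hosts                                           # install the rebuilt file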
	I0729 13:37:50.994627  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:51.117275  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:51.134412  300746 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777 for IP: 192.168.61.84
	I0729 13:37:51.134439  300746 certs.go:194] generating shared ca certs ...
	I0729 13:37:51.134461  300746 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:51.134641  300746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:51.134692  300746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:51.134703  300746 certs.go:256] generating profile certs ...
	I0729 13:37:51.134825  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/client.key
	I0729 13:37:51.134901  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key.445c667e
	I0729 13:37:51.134962  300746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key
	I0729 13:37:51.135114  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:51.135153  300746 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:51.135166  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:51.135196  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:51.135225  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:51.135256  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:51.135309  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:51.136036  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:51.169507  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:51.201916  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:51.227860  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:51.263617  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:37:51.288105  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:37:51.314837  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:51.343892  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:37:51.367328  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:51.389470  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:51.411446  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:51.433270  300746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:51.448939  300746 ssh_runner.go:195] Run: openssl version
	I0729 13:37:51.454475  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:51.465080  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469541  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469605  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.475366  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:51.485979  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:51.496382  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500511  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500571  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.505997  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:51.516733  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:51.527637  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531754  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531797  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.537237  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
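The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes; a minimal sketch of how such a link is derived for minikubeCA.pem:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"             # hash-named link used by OpenSSL's CA lookup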
	I0729 13:37:51.548006  300746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:51.552581  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:51.558414  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:51.563879  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:51.569869  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:51.575800  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:37:51.581525  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
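Each `-checkend 86400` call above asks whether the certificate survives the next 24 hours; a minimal sketch of the same check with an explicit failure branch:
	if ! sudo openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "certificate expires within 24h; it would be regenerated"
	fi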
	I0729 13:37:51.587642  300746 kubeadm.go:392] StartCluster: {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:51.587777  300746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:37:51.587828  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.627118  300746 cri.go:89] found id: ""
	I0729 13:37:51.627212  300746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:37:51.637686  300746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:37:51.637711  300746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:37:51.637765  300746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:37:51.647368  300746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:37:51.648291  300746 kubeconfig.go:125] found "no-preload-566777" server: "https://192.168.61.84:8443"
	I0729 13:37:51.650296  300746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:37:51.659616  300746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.84
	I0729 13:37:51.659649  300746 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:37:51.659663  300746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:37:51.659714  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.700636  300746 cri.go:89] found id: ""
	I0729 13:37:51.700703  300746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:37:51.718225  300746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:37:51.728237  300746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:37:51.728257  300746 kubeadm.go:157] found existing configuration files:
	
	I0729 13:37:51.728303  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:37:51.738280  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:37:51.738364  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:37:51.748770  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:37:51.758572  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:37:51.758649  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:37:51.769634  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.779757  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:37:51.779827  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.790745  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:37:51.801212  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:37:51.801275  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
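The four grep/rm pairs above follow one pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint. A compact sketch of the same cleanup (the loop is illustrative, not minikube's own code):
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done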
	I0729 13:37:51.811706  300746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:37:51.821251  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:51.933905  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.401823  301425 start.go:364] duration metric: took 3m42.323534375s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:37:53.401902  301425 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:53.401914  301425 fix.go:54] fixHost starting: 
	I0729 13:37:53.402310  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:53.402344  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:53.421973  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 13:37:53.422456  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:53.423079  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:37:53.423112  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:53.423508  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:53.423734  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:37:53.423883  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:37:53.425687  301425 fix.go:112] recreateIfNeeded on old-k8s-version-924039: state=Stopped err=<nil>
	I0729 13:37:53.425733  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	W0729 13:37:53.425902  301425 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:53.427931  301425 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	I0729 13:37:52.097443  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.097870  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Found IP for machine: 192.168.50.34
	I0729 13:37:52.097904  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserving static IP address...
	I0729 13:37:52.097923  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has current primary IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.098329  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.098357  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserved static IP address: 192.168.50.34
	I0729 13:37:52.098377  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | skip adding static IP to network mk-default-k8s-diff-port-972693 - found existing host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"}
	I0729 13:37:52.098406  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for SSH to be available...
	I0729 13:37:52.098423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Getting to WaitForSSH function...
	I0729 13:37:52.100530  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.100878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.100908  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.101029  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH client type: external
	I0729 13:37:52.101062  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa (-rw-------)
	I0729 13:37:52.101106  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:52.101121  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | About to run SSH command:
	I0729 13:37:52.101145  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | exit 0
	I0729 13:37:52.225041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:52.225381  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetConfigRaw
	I0729 13:37:52.226001  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.228722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229109  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.229140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229315  301044 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/config.json ...
	I0729 13:37:52.229522  301044 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:52.229541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:52.229716  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.231823  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.232181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.232446  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232613  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232758  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.232913  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.233100  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.233111  301044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:52.336948  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:52.336978  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337288  301044 buildroot.go:166] provisioning hostname "default-k8s-diff-port-972693"
	I0729 13:37:52.337321  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337552  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.340284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340598  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.340623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340724  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.340913  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341090  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341261  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.341419  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.341591  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.341603  301044 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-972693 && echo "default-k8s-diff-port-972693" | sudo tee /etc/hostname
	I0729 13:37:52.455264  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-972693
	
	I0729 13:37:52.455294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.457937  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458304  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.458332  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458465  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.458667  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458857  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458995  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.459170  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.459352  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.459376  301044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-972693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-972693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-972693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:52.570543  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:52.570578  301044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:52.570603  301044 buildroot.go:174] setting up certificates
	I0729 13:37:52.570617  301044 provision.go:84] configureAuth start
	I0729 13:37:52.570628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.570900  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.573309  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573609  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.573641  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573751  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.575826  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.576177  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576344  301044 provision.go:143] copyHostCerts
	I0729 13:37:52.576414  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:52.576483  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:52.576568  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:52.576698  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:52.576707  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:52.576728  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:52.576786  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:52.576815  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:52.576845  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:52.576902  301044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-972693 san=[127.0.0.1 192.168.50.34 default-k8s-diff-port-972693 localhost minikube]
	I0729 13:37:52.764928  301044 provision.go:177] copyRemoteCerts
	I0729 13:37:52.764988  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:52.765018  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.767540  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.767842  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.767872  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.768041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.768213  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.768362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.768474  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
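For context, the "new ssh client" created above (user docker on 192.168.50.34:22 with key-based auth) follows the usual golang.org/x/crypto/ssh pattern. The sketch below is not minikube's sshutil/ssh_runner code, only a minimal stand-in that reuses the address, username and key path reported in the log line:

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address taken from the sshutil.go line above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.50.34:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per command, mirroring how the remote "Run:" steps are issued.
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}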
	I0729 13:37:52.847615  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:52.877666  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 13:37:52.901219  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:52.924922  301044 provision.go:87] duration metric: took 354.279838ms to configureAuth
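The configureAuth step timed above signs a server certificate whose SANs are [127.0.0.1 192.168.50.34 default-k8s-diff-port-972693 localhost minikube]. A minimal crypto/x509 sketch of a certificate with that shape follows; it generates a throwaway CA instead of loading ca.pem/ca-key.pem from .minikube/certs, so it is an illustration of the idea rather than the provision.go implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from .minikube/certs.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate with the SAN set and org reported in the log.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-972693"}},
		DNSNames:     []string{"default-k8s-diff-port-972693", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.34")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}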
	I0729 13:37:52.924953  301044 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:52.925157  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:52.925244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.927791  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.928181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.928533  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928830  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.928978  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.929208  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.929230  301044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:53.176359  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:53.176391  301044 machine.go:97] duration metric: took 946.853063ms to provisionDockerMachine
	I0729 13:37:53.176404  301044 start.go:293] postStartSetup for "default-k8s-diff-port-972693" (driver="kvm2")
	I0729 13:37:53.176419  301044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:53.176441  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.176782  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:53.176818  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.179340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.179698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179858  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.180053  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.180214  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.180336  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.259826  301044 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:53.264059  301044 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:53.264087  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:53.264155  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:53.264239  301044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:53.264345  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:53.273954  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:53.297340  301044 start.go:296] duration metric: took 120.913486ms for postStartSetup
	I0729 13:37:53.297392  301044 fix.go:56] duration metric: took 18.435815853s for fixHost
	I0729 13:37:53.297421  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.299859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300187  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.300218  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.300576  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300755  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300932  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.301116  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:53.301314  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:53.301324  301044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:53.401628  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260273.369344581
	
	I0729 13:37:53.401671  301044 fix.go:216] guest clock: 1722260273.369344581
	I0729 13:37:53.401682  301044 fix.go:229] Guest: 2024-07-29 13:37:53.369344581 +0000 UTC Remote: 2024-07-29 13:37:53.297397345 +0000 UTC m=+281.644280810 (delta=71.947236ms)
	I0729 13:37:53.401705  301044 fix.go:200] guest clock delta is within tolerance: 71.947236ms
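The fix.go lines above compare the guest clock against the host-observed time and accept the drift because it sits below a tolerance. A tiny sketch of that comparison using the two timestamps from the log; the 2s threshold is an assumed value for illustration, not necessarily the one minikube applies:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the fix.go lines above.
	guest := time.Unix(1722260273, 369344581).UTC()
	remote := time.Date(2024, 7, 29, 13, 37, 53, 297397345, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // assumed threshold, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
	}
}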
	I0729 13:37:53.401711  301044 start.go:83] releasing machines lock for "default-k8s-diff-port-972693", held for 18.540175489s
	I0729 13:37:53.401760  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.402061  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:53.404813  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405182  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.405207  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405359  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.405844  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406153  301044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:53.406210  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.406289  301044 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:53.406315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.409060  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409351  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409460  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.409814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.409878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409909  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409992  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410092  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.410183  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.410315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.410435  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410631  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.510289  301044 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:53.517635  301044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:53.660575  301044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:53.668128  301044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:53.668207  301044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:53.690732  301044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:53.690764  301044 start.go:495] detecting cgroup driver to use...
	I0729 13:37:53.690838  301044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:53.707461  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:53.721922  301044 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:53.722004  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:53.740941  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:53.759323  301044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:53.900344  301044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:54.065647  301044 docker.go:233] disabling docker service ...
	I0729 13:37:54.065780  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:54.082468  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:54.098283  301044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:54.213104  301044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:54.339560  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:54.360412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:54.384836  301044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:37:54.384900  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.400889  301044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:54.400980  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.416941  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.433090  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.449306  301044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:54.461742  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.477135  301044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.501431  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.519646  301044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:54.532995  301044 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:54.533074  301044 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:54.550639  301044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
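The three commands above form a fallback chain: the bridge-nf-call-iptables sysctl cannot be read, so br_netfilter is loaded, and IPv4 forwarding is switched on afterwards. A minimal local sketch of that chain (it must run as root; paths and module name are the ones in the log, and it is not the crio.go code itself):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, the module is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not available, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintf(os.Stderr, "enabling ip_forward: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("ip forwarding enabled")
}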
	I0729 13:37:54.561896  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:54.710789  301044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:54.885480  301044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:54.885558  301044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:54.890556  301044 start.go:563] Will wait 60s for crictl version
	I0729 13:37:54.890629  301044 ssh_runner.go:195] Run: which crictl
	I0729 13:37:54.894644  301044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:54.941141  301044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:54.941236  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:54.983380  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:55.027770  301044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
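Before declaring the runtime ready, the log waits (up to 60s each) for /var/run/crio/crio.sock to appear and for crictl to answer. A small sketch of that polling pattern; the 500ms interval is an assumption, and the local checks stand in for the real ssh_runner calls:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls check until it succeeds or the timeout elapses.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Wait for the CRI-O socket path, as in "Will wait 60s for socket path".
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Then wait for crictl to report a version, as in "Will wait 60s for crictl version".
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("/usr/bin/crictl", "version").Run()
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket and crictl are ready")
}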
	I0729 13:37:53.429298  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .Start
	I0729 13:37:53.429471  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:37:53.430263  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:37:53.430649  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:37:53.431011  301425 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:37:53.431825  301425 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:37:54.749878  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:37:54.751148  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.751716  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.751784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.751696  302377 retry.go:31] will retry after 230.330776ms: waiting for machine to come up
	I0729 13:37:54.984551  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.985138  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.985183  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.985094  302377 retry.go:31] will retry after 291.000555ms: waiting for machine to come up
	I0729 13:37:55.277730  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.278199  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.278220  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.278152  302377 retry.go:31] will retry after 360.474919ms: waiting for machine to come up
	I0729 13:37:55.640675  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.641255  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.641288  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.641207  302377 retry.go:31] will retry after 480.424143ms: waiting for machine to come up
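The retry.go lines above poll for the domain's DHCP lease with delays that grow from a few hundred milliseconds upward. A minimal sketch of such a retry loop; lookupIP is a hypothetical stand-in for the libvirt lease query, and the growth factor and jitter are assumptions chosen only to resemble the intervals printed in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt for the DHCP lease.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add some jitter so concurrent waiters do not poll in lockstep.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
	}
	fmt.Println("gave up waiting for an IP")
}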
	I0729 13:37:55.029239  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:55.032722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033225  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:55.033257  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033668  301044 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:55.038429  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:55.056198  301044 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:55.056373  301044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:55.056440  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:55.100534  301044 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:37:55.100612  301044 ssh_runner.go:195] Run: which lz4
	I0729 13:37:55.105708  301044 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:37:55.110384  301044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:37:55.110417  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:37:56.630726  301044 crio.go:462] duration metric: took 1.525047583s to copy over tarball
	I0729 13:37:56.630816  301044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
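The preload flow above is: probe for /preloaded.tar.lz4 on the guest, copy the cached tarball over when it is missing, then unpack it into /var with tar using an lz4 filter. The sketch below mirrors those steps locally (a plain cp stands in for the scp hop purely to stay self-contained, and it must run as root); the source path is the one printed in the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // guest-side path used in the log above
	const cached = "/home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4"

	// Existence check mirrors the `stat` probe; copy only when it is missing.
	if _, err := os.Stat(tarball); err != nil {
		if out, err := exec.Command("cp", cached, tarball).CombinedOutput(); err != nil {
			log.Fatalf("copying preload: %v\n%s", err, out)
		}
	}

	// Unpack into /var the same way the log does (tar with an lz4 filter).
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extracting preload: %v\n%s", err, out)
	}
	fmt.Println("preloaded images extracted")
}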
	I0729 13:37:53.446825  300746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.51288234s)
	I0729 13:37:53.446866  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.663105  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.740482  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.823641  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:37:53.823753  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.324001  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.824299  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.933931  300746 api_server.go:72] duration metric: took 1.11028623s to wait for apiserver process to appear ...
	I0729 13:37:54.933969  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:37:54.933996  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:54.934563  300746 api_server.go:269] stopped: https://192.168.61.84:8443/healthz: Get "https://192.168.61.84:8443/healthz": dial tcp 192.168.61.84:8443: connect: connection refused
	I0729 13:37:55.434598  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.005676  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.005719  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.005737  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.066371  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.066408  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.434268  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.439205  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.439240  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:58.934796  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.944368  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.944399  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.434576  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.443061  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:59.443098  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.934805  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.943892  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:37:59.955156  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:37:59.955185  300746 api_server.go:131] duration metric: took 5.021207326s to wait for apiserver health ...
	I0729 13:37:59.955197  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.955205  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:00.307264  300746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
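The api_server.go lines above show the usual readiness loop: keep GETing /healthz, treating connection refused, 403 (RBAC bootstrap roles not yet created) and 500 (post-start hooks still failing) as "not ready yet", and stop once the endpoint returns 200. A compact sketch of that loop; the 2-minute budget and 500ms interval are assumptions, and certificate verification is skipped because the apiserver cert is self-signed in this setup:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	url := "https://192.168.61.84:8443/healthz" // endpoint from the log above
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed budget for illustration
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Println("healthz returned", code, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}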
	I0729 13:37:56.123854  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.124460  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.124487  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.124433  302377 retry.go:31] will retry after 529.614291ms: waiting for machine to come up
	I0729 13:37:56.656136  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.656626  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.656657  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.656599  302377 retry.go:31] will retry after 794.429248ms: waiting for machine to come up
	I0729 13:37:57.452523  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:57.453001  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:57.453033  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:57.452952  302377 retry.go:31] will retry after 1.140583184s: waiting for machine to come up
	I0729 13:37:58.594636  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:58.595067  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:58.595109  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:58.595024  302377 retry.go:31] will retry after 894.563974ms: waiting for machine to come up
	I0729 13:37:59.491447  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:59.492094  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:59.492120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:59.491993  302377 retry.go:31] will retry after 1.145531829s: waiting for machine to come up
	I0729 13:38:00.639387  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:00.639807  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:00.639838  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:00.639754  302377 retry.go:31] will retry after 1.949675091s: waiting for machine to come up
	I0729 13:37:58.983188  301044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.352336314s)
	I0729 13:37:58.983233  301044 crio.go:469] duration metric: took 2.352468802s to extract the tarball
	I0729 13:37:58.983245  301044 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:37:59.022539  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:59.086881  301044 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:37:59.086913  301044 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:37:59.086924  301044 kubeadm.go:934] updating node { 192.168.50.34 8444 v1.30.3 crio true true} ...
	I0729 13:37:59.087062  301044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-972693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:59.087158  301044 ssh_runner.go:195] Run: crio config
	I0729 13:37:59.144128  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.144163  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:59.144182  301044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:59.144209  301044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-972693 NodeName:default-k8s-diff-port-972693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:59.144376  301044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-972693"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:59.144452  301044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:37:59.154648  301044 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:59.154717  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:59.164572  301044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0729 13:37:59.182967  301044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:37:59.202507  301044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0729 13:37:59.221603  301044 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:59.226646  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:59.244199  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:59.390312  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:59.411152  301044 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693 for IP: 192.168.50.34
	I0729 13:37:59.411178  301044 certs.go:194] generating shared ca certs ...
	I0729 13:37:59.411213  301044 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:59.411421  301044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:59.411481  301044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:59.411495  301044 certs.go:256] generating profile certs ...
	I0729 13:37:59.411614  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/client.key
	I0729 13:37:59.411709  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key.0cff1f82
	I0729 13:37:59.411780  301044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key
	I0729 13:37:59.411977  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:59.412036  301044 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:59.412052  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:59.412090  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:59.412124  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:59.412156  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:59.412221  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:59.413262  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:59.450186  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:59.496339  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:59.535462  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:59.569433  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:37:59.602826  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:37:59.639581  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:59.672966  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:37:59.707007  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:59.741894  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:59.771364  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:59.802928  301044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:59.828730  301044 ssh_runner.go:195] Run: openssl version
	I0729 13:37:59.837356  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:59.855071  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861707  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861781  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.870815  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:59.884842  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:59.899473  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904238  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904312  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.910221  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:59.923542  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:59.936729  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943440  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943496  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.951099  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:59.964578  301044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:59.969476  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:59.975715  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:59.981719  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:59.987788  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:59.993753  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:00.000228  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
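[editor's note] The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. An equivalent check written directly against crypto/x509 (a sketch only, not minikube's implementation) could look like:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // after the given duration (the analogue of `openssl x509 -checkend`).
    func validFor(path string, d time.Duration) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if time.Now().Add(d).After(cert.NotAfter) {
    		return fmt.Errorf("certificate expires at %s", cert.NotAfter)
    	}
    	return nil
    }

    func main() {
    	err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(err)
    }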
	I0729 13:38:00.007898  301044 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:00.008033  301044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:00.008091  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.054999  301044 cri.go:89] found id: ""
	I0729 13:38:00.055097  301044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:00.069066  301044 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:00.069090  301044 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:00.069148  301044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:00.083486  301044 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:00.084538  301044 kubeconfig.go:125] found "default-k8s-diff-port-972693" server: "https://192.168.50.34:8444"
	I0729 13:38:00.086623  301044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:00.099514  301044 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.34
	I0729 13:38:00.099555  301044 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:00.099570  301044 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:00.099644  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.137643  301044 cri.go:89] found id: ""
	I0729 13:38:00.137726  301044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:00.157036  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:00.168591  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:00.168614  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:00.168664  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:38:00.178379  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:00.178449  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:00.189688  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:38:00.199323  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:00.199388  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:00.209351  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.219100  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:00.219171  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.228754  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:38:00.238453  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:00.238526  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:00.248479  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:00.258717  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.377121  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.413128  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:00.424610  300746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:00.446537  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:01.601214  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:01.601265  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:01.601278  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:01.601296  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:01.601305  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:01.601312  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:38:01.601323  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:01.601332  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:01.601346  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:01.601357  300746 system_pods.go:74] duration metric: took 1.154789909s to wait for pod list to return data ...
	I0729 13:38:01.601370  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:02.057111  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:02.057149  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:02.057182  300746 node_conditions.go:105] duration metric: took 455.806302ms to run NodePressure ...
	I0729 13:38:02.057210  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.420014  300746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426444  300746 kubeadm.go:739] kubelet initialised
	I0729 13:38:02.426467  300746 kubeadm.go:740] duration metric: took 6.420611ms waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426478  300746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:02.431168  300746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.436892  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436916  300746 pod_ready.go:81] duration metric: took 5.728016ms for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.436925  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436932  300746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.443079  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443102  300746 pod_ready.go:81] duration metric: took 6.163444ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.443110  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443115  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.447945  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447964  300746 pod_ready.go:81] duration metric: took 4.843364ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.447973  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447980  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.457004  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457027  300746 pod_ready.go:81] duration metric: took 9.037058ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.457038  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457045  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.825208  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825246  300746 pod_ready.go:81] duration metric: took 368.180356ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.825259  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825268  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.225868  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.225975  300746 pod_ready.go:81] duration metric: took 400.697293ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.225993  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.226003  300746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.627568  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627605  300746 pod_ready.go:81] duration metric: took 401.589314ms for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.627618  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627628  300746 pod_ready.go:38] duration metric: took 1.201138036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
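[editor's note] The pod_ready waits above poll each system-critical pod until its Ready condition is True, and skip pods whose hosting node is not yet Ready. A condensed client-go sketch of that readiness test (an illustration under assumed imports and a hypothetical kubeconfig path, not the test harness's own helper):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true when the pod's Ready condition is True.
    func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// kubeconfig path is hypothetical; use the profile's kubeconfig in practice
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := isPodReady(context.Background(), client, "kube-system", "etcd-no-preload-566777")
    	fmt.Println(ready, err)
    }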
	I0729 13:38:03.627651  300746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:03.646855  300746 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:03.646893  300746 kubeadm.go:597] duration metric: took 12.009173344s to restartPrimaryControlPlane
	I0729 13:38:03.646910  300746 kubeadm.go:394] duration metric: took 12.059279913s to StartCluster
	I0729 13:38:03.646936  300746 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.647029  300746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:03.649213  300746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.649527  300746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:03.649810  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:38:03.649861  300746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:03.649931  300746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-566777"
	I0729 13:38:03.649962  300746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-566777"
	W0729 13:38:03.649974  300746 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:03.650021  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650400  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.650428  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.650493  300746 addons.go:69] Setting default-storageclass=true in profile "no-preload-566777"
	I0729 13:38:03.650533  300746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-566777"
	I0729 13:38:03.650601  300746 addons.go:69] Setting metrics-server=true in profile "no-preload-566777"
	I0729 13:38:03.650631  300746 addons.go:234] Setting addon metrics-server=true in "no-preload-566777"
	W0729 13:38:03.650642  300746 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:03.650675  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650985  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651014  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651029  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651054  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651324  300746 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:03.652887  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:03.670088  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0729 13:38:03.670283  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0729 13:38:03.670694  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.670769  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.671418  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671423  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671437  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671440  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671755  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0729 13:38:03.671900  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.671927  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.672491  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.672515  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.672711  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.673183  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.673207  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.673468  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.673480  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.673857  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.674012  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.677726  300746 addons.go:234] Setting addon default-storageclass=true in "no-preload-566777"
	W0729 13:38:03.677746  300746 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:03.677777  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.678133  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.678151  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.692817  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 13:38:03.693446  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.693919  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.693945  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.694335  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.694504  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.694718  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0729 13:38:03.695225  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.695726  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.695744  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.696028  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.696154  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.696514  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.697635  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.698597  300746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:03.699466  300746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:03.700447  300746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:03.700463  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:03.700481  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.701375  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:03.701390  300746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:03.701404  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.705199  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705225  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705844  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705866  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705893  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705911  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705946  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706143  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706313  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.706471  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.706755  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.708988  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.710193  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0729 13:38:03.710735  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.711282  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.711296  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.711684  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.712271  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.712322  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.712966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.713103  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.756710  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 13:38:03.757254  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.757760  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.757784  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.758125  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.758376  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.760315  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.760577  300746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:03.760594  300746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:03.760612  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.763679  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.764208  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.764277  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.765045  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.765227  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.765386  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.765546  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.883257  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:03.905104  300746 node_ready.go:35] waiting up to 6m0s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:03.985382  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:03.985412  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:04.014094  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:04.014119  300746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:04.016390  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:04.047695  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:04.062249  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:04.062328  300746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:04.095999  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:05.473341  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4569173s)
	I0729 13:38:05.473396  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473409  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.473421  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.425688075s)
	I0729 13:38:05.473547  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473558  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474089  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.474117  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474129  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474133  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474137  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474142  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474158  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474148  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474213  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.475707  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.475738  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.475746  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.476002  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.476095  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.476124  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.490038  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.490081  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.490420  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.490440  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562064  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46596112s)
	I0729 13:38:05.562122  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562136  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.562492  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.562516  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562532  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562541  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.564397  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.564410  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.564448  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.564471  300746 addons.go:475] Verifying addon metrics-server=true in "no-preload-566777"
	I0729 13:38:05.566888  300746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:38:02.590640  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:02.591134  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:02.591162  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:02.591087  302377 retry.go:31] will retry after 1.765945358s: waiting for machine to come up
	I0729 13:38:04.358332  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:04.358934  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:04.358963  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:04.358899  302377 retry.go:31] will retry after 2.923224015s: waiting for machine to come up
	I0729 13:38:01.713425  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.33625836s)
	I0729 13:38:01.713462  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:01.941164  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.017707  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.134991  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:02.135105  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:02.636248  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.135563  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.264470  301044 api_server.go:72] duration metric: took 1.129485078s to wait for apiserver process to appear ...
	I0729 13:38:03.264512  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:03.264545  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.392570  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.392609  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.392626  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.423076  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.423120  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.764837  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.770393  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:06.770428  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.264879  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.269632  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:07.269670  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.764878  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.770291  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:38:07.781660  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:07.781691  301044 api_server.go:131] duration metric: took 4.517171532s to wait for apiserver health ...
	I0729 13:38:07.781700  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:38:07.781707  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:07.784769  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
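[editor's note] The api_server.go lines above poll https://192.168.50.34:8444/healthz, logging the 403 (anonymous user) and 500 (bootstrap hooks still pending) responses as warnings and retrying until the endpoint returns 200 "ok". A minimal polling loop in Go that captures the same pattern (a sketch; certificate verification is skipped only because, like the anonymous probe above, it presents no client credentials):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200, or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// unauthenticated probe: skip server cert verification for this sketch
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.50.34:8444/healthz", 2*time.Minute))
    }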
	I0729 13:38:05.568441  300746 addons.go:510] duration metric: took 1.918571396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:38:05.916109  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:07.284234  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:07.284764  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:07.284819  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:07.284694  302377 retry.go:31] will retry after 2.9786525s: waiting for machine to come up
	I0729 13:38:10.265771  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:10.266128  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:10.266161  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:10.266077  302377 retry.go:31] will retry after 5.044155966s: waiting for machine to come up
	I0729 13:38:07.786038  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:07.824838  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:07.850139  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:07.862900  301044 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:07.862932  301044 system_pods.go:61] "coredns-7db6d8ff4d-zllk5" [3ebb659a-7849-498b-a81c-54f75c8e1536] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:07.862943  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [fc5c7286-5cd4-4eeb-879e-6263f82c4164] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:07.862950  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [a3a13c0b-844d-4a5b-93a0-fb9784b4b095] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:07.862957  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4e6c469d-b2a5-4ec2-95a4-01b6ad7de347] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:07.862964  301044 system_pods.go:61] "kube-proxy-6hxkb" [42b01d8b-9a37-40d0-ac32-09e3e261f953] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:07.862979  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [2373a650-57bb-4dc3-96ab-7f6cd040c148] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:07.862985  301044 system_pods.go:61] "metrics-server-569cc877fc-dlrjb" [360087fa-273d-4ba8-a299-54678724c45e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:07.862990  301044 system_pods.go:61] "storage-provisioner" [3e3fb5ef-6761-4671-a093-8616241cd98f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:07.862996  301044 system_pods.go:74] duration metric: took 12.833023ms to wait for pod list to return data ...
	I0729 13:38:07.863007  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:07.868359  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:07.868385  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:07.868395  301044 node_conditions.go:105] duration metric: took 5.383164ms to run NodePressure ...
	I0729 13:38:07.868412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:08.166890  301044 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175546  301044 kubeadm.go:739] kubelet initialised
	I0729 13:38:08.175570  301044 kubeadm.go:740] duration metric: took 8.646638ms waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175588  301044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.186944  301044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.194446  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194479  301044 pod_ready.go:81] duration metric: took 7.500494ms for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.194487  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194495  301044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.202341  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202366  301044 pod_ready.go:81] duration metric: took 7.863125ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.202380  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202388  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.209017  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209041  301044 pod_ready.go:81] duration metric: took 6.646309ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.209051  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209057  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.256503  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256530  301044 pod_ready.go:81] duration metric: took 47.465005ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.256543  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256552  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652875  301044 pod_ready.go:92] pod "kube-proxy-6hxkb" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:08.652901  301044 pod_ready.go:81] duration metric: took 396.340654ms for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652912  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.658352  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:08.411629  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:08.908602  300746 node_ready.go:49] node "no-preload-566777" has status "Ready":"True"
	I0729 13:38:08.908629  300746 node_ready.go:38] duration metric: took 5.003487604s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:08.908639  300746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.914468  300746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.921796  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.313102  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313621  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313650  301425 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:38:15.313665  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:38:15.314120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.314168  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | skip adding static IP to network mk-old-k8s-version-924039 - found existing host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"}
	I0729 13:38:15.314187  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:38:15.314205  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:38:15.314219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:38:15.316468  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316779  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.316827  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316994  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:38:15.317013  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:38:15.317042  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:15.317054  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:38:15.317076  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:38:15.444818  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:15.445203  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:38:15.445858  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.448296  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.448784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.448834  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.449028  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:38:15.449208  301425 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:15.449226  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:15.449469  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.451695  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452017  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.452046  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.452420  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452606  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452770  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.452945  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.453151  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.453165  301425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:15.561558  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:15.561590  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.561859  301425 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:38:15.561887  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.562079  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.564776  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565116  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.565157  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565286  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.565495  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565669  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565805  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.565952  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.566129  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.566140  301425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:38:15.687712  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:38:15.687744  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.690289  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690614  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.690638  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690864  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.691104  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691290  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691463  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.691649  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.691841  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.691869  301425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:15.814102  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:15.814140  301425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:15.814190  301425 buildroot.go:174] setting up certificates
	I0729 13:38:15.814198  301425 provision.go:84] configureAuth start
	I0729 13:38:15.814210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.814521  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.817140  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817548  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.817583  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817728  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.819957  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820307  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.820335  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820476  301425 provision.go:143] copyHostCerts
	I0729 13:38:15.820529  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:15.820539  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:15.820592  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:15.820685  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:15.820693  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:15.820713  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:15.820772  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:15.820779  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:15.820828  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:15.820909  301425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:38:15.895797  301425 provision.go:177] copyRemoteCerts
	I0729 13:38:15.895866  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:15.895898  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.898774  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899173  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.899214  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899444  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.899672  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.899882  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.900048  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.606081  300705 start.go:364] duration metric: took 56.40993179s to acquireMachinesLock for "embed-certs-135920"
	I0729 13:38:16.606131  300705 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:38:16.606139  300705 fix.go:54] fixHost starting: 
	I0729 13:38:16.606611  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:16.606652  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:16.626502  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0729 13:38:16.626989  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:16.627491  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:16.627511  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:16.627897  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:16.628100  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:16.628242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:16.629856  300705 fix.go:112] recreateIfNeeded on embed-certs-135920: state=Stopped err=<nil>
	I0729 13:38:16.629879  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	W0729 13:38:16.630046  300705 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:38:16.632177  300705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-135920" ...
	I0729 13:38:12.659133  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.159457  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.159792  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.159818  301044 pod_ready.go:81] duration metric: took 7.506898395s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.159827  301044 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.633625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Start
	I0729 13:38:16.633803  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring networks are active...
	I0729 13:38:16.634580  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network default is active
	I0729 13:38:16.634947  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network mk-embed-certs-135920 is active
	I0729 13:38:16.635454  300705 main.go:141] libmachine: (embed-certs-135920) Getting domain xml...
	I0729 13:38:16.636201  300705 main.go:141] libmachine: (embed-certs-135920) Creating domain...
	I0729 13:38:15.988091  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:16.019058  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:38:16.047266  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:16.072992  301425 provision.go:87] duration metric: took 258.777499ms to configureAuth
	I0729 13:38:16.073029  301425 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:16.073250  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:38:16.073338  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.075801  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.076219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076350  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.076560  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076750  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076972  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.077169  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.077354  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.077369  301425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:16.357614  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:16.357650  301425 machine.go:97] duration metric: took 908.424232ms to provisionDockerMachine
	I0729 13:38:16.357666  301425 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:38:16.357680  301425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:16.357706  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.358060  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:16.358089  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.360841  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361257  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.361314  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361410  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.361645  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.361821  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.361987  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.448673  301425 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:16.453435  301425 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:16.453461  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:16.453543  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:16.453638  301425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:16.453763  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:16.464185  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:16.490358  301425 start.go:296] duration metric: took 132.675687ms for postStartSetup
	I0729 13:38:16.490422  301425 fix.go:56] duration metric: took 23.088507704s for fixHost
	I0729 13:38:16.490450  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.493249  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493571  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.493612  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493781  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.494046  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494241  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494388  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.494561  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.494759  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.494769  301425 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:16.605903  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260296.583363181
	
	I0729 13:38:16.605930  301425 fix.go:216] guest clock: 1722260296.583363181
	I0729 13:38:16.605940  301425 fix.go:229] Guest: 2024-07-29 13:38:16.583363181 +0000 UTC Remote: 2024-07-29 13:38:16.490427183 +0000 UTC m=+245.556685019 (delta=92.935998ms)
	I0729 13:38:16.605967  301425 fix.go:200] guest clock delta is within tolerance: 92.935998ms
	I0729 13:38:16.605974  301425 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 23.204101255s
	I0729 13:38:16.606006  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.606296  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:16.609324  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609669  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.609701  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609826  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610328  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610516  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610589  301425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:16.610673  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.610758  301425 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:16.610786  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.613356  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613639  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613689  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.613712  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613910  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614092  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.614112  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.614122  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614287  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614307  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614449  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.614496  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614635  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614771  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.719174  301425 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:16.726348  301425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:16.880130  301425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:16.886410  301425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:16.886484  301425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:16.904120  301425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:16.904151  301425 start.go:495] detecting cgroup driver to use...
	I0729 13:38:16.904222  301425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:16.927036  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:16.947380  301425 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:16.947448  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:16.964612  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:16.979266  301425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:17.108950  301425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:17.263118  301425 docker.go:233] disabling docker service ...
	I0729 13:38:17.263192  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:17.282563  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:17.299473  301425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:17.448598  301425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:17.568025  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:17.583700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:17.603159  301425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:38:17.603223  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.615655  301425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:17.615728  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.628639  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.640456  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.652160  301425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:17.663864  301425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:17.675293  301425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:17.675361  301425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:17.690427  301425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:17.702163  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:17.831401  301425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:17.985760  301425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:17.985851  301425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:17.990740  301425 start.go:563] Will wait 60s for crictl version
	I0729 13:38:17.990798  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:17.994741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:18.035793  301425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:18.035889  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.065036  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.097441  301425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:38:13.421995  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.944090  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.933596  300746 pod_ready.go:92] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.933621  300746 pod_ready.go:81] duration metric: took 8.019124005s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.933634  300746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943434  300746 pod_ready.go:92] pod "etcd-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.943465  300746 pod_ready.go:81] duration metric: took 9.816863ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943478  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952623  300746 pod_ready.go:92] pod "kube-apiserver-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.952644  300746 pod_ready.go:81] duration metric: took 9.157998ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952653  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.956989  300746 pod_ready.go:92] pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.957010  300746 pod_ready.go:81] duration metric: took 4.350015ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.957023  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962772  300746 pod_ready.go:92] pod "kube-proxy-ql6wf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.962796  300746 pod_ready.go:81] duration metric: took 5.763769ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962807  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318604  300746 pod_ready.go:92] pod "kube-scheduler-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:17.318632  300746 pod_ready.go:81] duration metric: took 355.816982ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318642  300746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:18.098840  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:18.102182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102629  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:18.102665  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102925  301425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:18.107544  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:18.122039  301425 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:18.122176  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:38:18.122249  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:18.169198  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:18.169279  301425 ssh_runner.go:195] Run: which lz4
	I0729 13:38:18.173861  301425 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:18.178840  301425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:18.178881  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:38:19.887360  301425 crio.go:462] duration metric: took 1.713549828s to copy over tarball
	I0729 13:38:19.887450  301425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:18.167033  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:20.168009  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:17.933984  300705 main.go:141] libmachine: (embed-certs-135920) Waiting to get IP...
	I0729 13:38:17.935033  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:17.935595  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:17.935652  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:17.935560  302586 retry.go:31] will retry after 195.331915ms: waiting for machine to come up
	I0729 13:38:18.133074  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.133566  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.133592  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.133513  302586 retry.go:31] will retry after 348.993714ms: waiting for machine to come up
	I0729 13:38:18.484164  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.484746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.484771  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.484703  302586 retry.go:31] will retry after 372.899167ms: waiting for machine to come up
	I0729 13:38:18.859212  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.859721  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.859746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.859672  302586 retry.go:31] will retry after 415.38859ms: waiting for machine to come up
	I0729 13:38:19.276241  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.276785  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.276816  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.276715  302586 retry.go:31] will retry after 553.262343ms: waiting for machine to come up
	I0729 13:38:19.831475  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.831994  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.832030  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.831949  302586 retry.go:31] will retry after 579.574559ms: waiting for machine to come up
	I0729 13:38:20.412838  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:20.413273  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:20.413302  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:20.413225  302586 retry.go:31] will retry after 908.712618ms: waiting for machine to come up
	I0729 13:38:21.324197  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:21.324824  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:21.324849  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:21.324723  302586 retry.go:31] will retry after 1.4226484s: waiting for machine to come up
	I0729 13:38:19.328753  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:21.330005  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.836067  301425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.948583188s)
	I0729 13:38:22.836104  301425 crio.go:469] duration metric: took 2.948710335s to extract the tarball
	I0729 13:38:22.836114  301425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:22.878370  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:22.921339  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:22.921370  301425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:38:22.921445  301425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.921545  301425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.921547  301425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:38:22.921633  301425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:22.921475  301425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.921479  301425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923052  301425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:38:22.923712  301425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.923723  301425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923733  301425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.923743  301425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.923803  301425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.923923  301425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.923976  301425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.079335  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.095210  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.096664  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.109172  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.111720  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.114386  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.200545  301425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:38:23.200629  301425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.200698  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.203884  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:38:23.261424  301425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:38:23.261500  301425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.261528  301425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:38:23.261561  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.261569  301425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.261610  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.267971  301425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:38:23.268018  301425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.268075  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317322  301425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:38:23.317369  301425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.317387  301425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:38:23.317422  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317441  301425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.317440  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.317489  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317507  301425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:38:23.317530  301425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:38:23.317551  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.317588  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.317553  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317683  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.322770  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.432764  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:38:23.432833  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:38:23.432877  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.442661  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:38:23.442741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:38:23.442785  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:38:23.442825  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:38:23.481401  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:38:23.484727  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:38:24.057020  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:24.203622  301425 cache_images.go:92] duration metric: took 1.282232497s to LoadCachedImages
	W0729 13:38:24.203724  301425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 13:38:24.203742  301425 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:38:24.203883  301425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:24.203996  301425 ssh_runner.go:195] Run: crio config
	I0729 13:38:24.274480  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:38:24.274531  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:24.274547  301425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:24.274582  301425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:38:24.274784  301425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:24.274863  301425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:38:24.285241  301425 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:24.285333  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:24.294677  301425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:38:24.311572  301425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:24.328768  301425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:38:24.346849  301425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:24.351047  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:24.364302  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:24.502947  301425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:24.524583  301425 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:38:24.524610  301425 certs.go:194] generating shared ca certs ...
	I0729 13:38:24.524626  301425 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:24.524831  301425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:24.524889  301425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:24.524908  301425 certs.go:256] generating profile certs ...
	I0729 13:38:24.525030  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:38:24.525090  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:38:24.525143  301425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:38:24.525300  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:24.525345  301425 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:24.525359  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:24.525390  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:24.525416  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:24.525440  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:24.525495  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:24.526416  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:24.593901  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:24.641443  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:24.679927  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:24.740839  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:38:24.779899  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:38:24.814327  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:24.842166  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:38:24.868619  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:24.894053  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:24.921437  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:24.947676  301425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:24.966469  301425 ssh_runner.go:195] Run: openssl version
	I0729 13:38:24.972780  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:24.985676  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990293  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990356  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.996523  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:25.007631  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:25.018369  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022779  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022840  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.028471  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:25.039307  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:25.050190  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054731  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054799  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.060568  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:25.071531  301425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:25.076195  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:25.082194  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:25.088573  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:25.095625  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:25.101900  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:25.107797  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:38:25.113775  301425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:25.113903  301425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:25.113975  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.159804  301425 cri.go:89] found id: ""
	I0729 13:38:25.159887  301425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:25.172248  301425 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:25.172271  301425 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:25.172321  301425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:25.182852  301425 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:25.184249  301425 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:25.186246  301425 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-924039" cluster setting kubeconfig missing "old-k8s-version-924039" context setting]
	I0729 13:38:25.188334  301425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:25.262355  301425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:25.274019  301425 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0729 13:38:25.274063  301425 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:25.274078  301425 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:25.274141  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.311295  301425 cri.go:89] found id: ""
	I0729 13:38:25.311365  301425 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:25.330380  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:25.343607  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:25.343651  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:25.343709  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:25.356979  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:25.357048  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:25.370453  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:25.386234  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:25.386308  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:25.403905  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.413906  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:25.414011  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.431532  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:25.448250  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:25.448325  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:25.459773  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:25.469841  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:25.584845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:22.667857  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:24.668022  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.748882  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:22.749346  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:22.749368  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:22.749292  302586 retry.go:31] will retry after 1.460248931s: waiting for machine to come up
	I0729 13:38:24.212019  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:24.212538  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:24.212567  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:24.212479  302586 retry.go:31] will retry after 1.462429402s: waiting for machine to come up
	I0729 13:38:25.676972  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:25.677407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:25.677429  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:25.677368  302586 retry.go:31] will retry after 2.551129627s: waiting for machine to come up
	I0729 13:38:23.826435  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:25.826981  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.325176  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:26.367294  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.618571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.775377  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.860948  301425 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:26.861038  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.361227  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.362003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.861172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.361165  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.861469  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.361306  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.861442  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.167961  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:29.667405  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.230763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:28.231276  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:28.231299  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:28.231239  302586 retry.go:31] will retry after 2.333059097s: waiting for machine to come up
	I0729 13:38:30.566386  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:30.566786  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:30.566815  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:30.566733  302586 retry.go:31] will retry after 3.717362174s: waiting for machine to come up
	I0729 13:38:30.326143  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:32.825635  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:31.361866  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.361776  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.862004  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.361883  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.862010  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.362013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.861958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.361390  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.861465  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.165082  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.165674  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.165885  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.288242  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288935  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has current primary IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288968  300705 main.go:141] libmachine: (embed-certs-135920) Found IP for machine: 192.168.72.207
	I0729 13:38:34.288987  300705 main.go:141] libmachine: (embed-certs-135920) Reserving static IP address...
	I0729 13:38:34.289557  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.289586  300705 main.go:141] libmachine: (embed-certs-135920) Reserved static IP address: 192.168.72.207
	I0729 13:38:34.289604  300705 main.go:141] libmachine: (embed-certs-135920) DBG | skip adding static IP to network mk-embed-certs-135920 - found existing host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"}
	I0729 13:38:34.289619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Getting to WaitForSSH function...
	I0729 13:38:34.289635  300705 main.go:141] libmachine: (embed-certs-135920) Waiting for SSH to be available...
	I0729 13:38:34.291951  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292308  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.292340  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292589  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH client type: external
	I0729 13:38:34.292619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa (-rw-------)
	I0729 13:38:34.292651  300705 main.go:141] libmachine: (embed-certs-135920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:34.292665  300705 main.go:141] libmachine: (embed-certs-135920) DBG | About to run SSH command:
	I0729 13:38:34.292677  300705 main.go:141] libmachine: (embed-certs-135920) DBG | exit 0
	I0729 13:38:34.417738  300705 main.go:141] libmachine: (embed-certs-135920) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:34.418128  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetConfigRaw
	I0729 13:38:34.418881  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.421524  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.421875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.421911  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.422113  300705 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/config.json ...
	I0729 13:38:34.422306  300705 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:34.422325  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:34.422544  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.424658  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.425073  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425167  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.425365  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425575  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425786  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.425935  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.426155  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.426172  300705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:34.529324  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:34.529354  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529600  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:38:34.529625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.532564  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.532966  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.533001  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.533274  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.533502  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533701  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533906  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.534116  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.534339  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.534353  300705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-135920 && echo "embed-certs-135920" | sudo tee /etc/hostname
	I0729 13:38:34.651175  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-135920
	
	I0729 13:38:34.651203  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.653763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.654085  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654266  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.654460  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654647  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654838  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.655024  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.655230  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.655246  300705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-135920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-135920/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-135920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:34.769548  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:34.769579  300705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:34.769597  300705 buildroot.go:174] setting up certificates
	I0729 13:38:34.769605  300705 provision.go:84] configureAuth start
	I0729 13:38:34.769613  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.769910  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.772513  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.772833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.772859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.773005  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.775133  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775480  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.775506  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775607  300705 provision.go:143] copyHostCerts
	I0729 13:38:34.775671  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:34.775681  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:34.775738  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:34.775828  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:34.775836  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:34.775855  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:34.775909  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:34.775916  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:34.775932  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:34.775981  300705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.embed-certs-135920 san=[127.0.0.1 192.168.72.207 embed-certs-135920 localhost minikube]
	I0729 13:38:34.901161  300705 provision.go:177] copyRemoteCerts
	I0729 13:38:34.901230  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:34.901258  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.903730  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.904060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904245  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.904428  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.904606  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.904726  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:34.986647  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:35.010406  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:38:35.033884  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:35.057289  300705 provision.go:87] duration metric: took 287.670762ms to configureAuth
	I0729 13:38:35.057318  300705 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:35.057521  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:35.057621  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.060303  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060634  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.060667  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060840  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.061053  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061259  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061433  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.061599  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.061775  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.061792  300705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:35.344890  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:35.344923  300705 machine.go:97] duration metric: took 922.603779ms to provisionDockerMachine
	I0729 13:38:35.344936  300705 start.go:293] postStartSetup for "embed-certs-135920" (driver="kvm2")
	I0729 13:38:35.344947  300705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:35.344964  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.345304  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:35.345341  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.348029  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348420  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.348458  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348612  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.348832  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.348981  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.349112  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.431975  300705 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:35.436416  300705 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:35.436441  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:35.436522  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:35.436621  300705 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:35.436767  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:35.446166  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:35.473466  300705 start.go:296] duration metric: took 128.511199ms for postStartSetup
	I0729 13:38:35.473513  300705 fix.go:56] duration metric: took 18.867373858s for fixHost
	I0729 13:38:35.473540  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.476118  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476477  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.476504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476672  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.476877  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477093  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477241  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.477468  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.477642  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.477652  300705 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:38:35.577853  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260315.546644144
	
	I0729 13:38:35.577882  300705 fix.go:216] guest clock: 1722260315.546644144
	I0729 13:38:35.577892  300705 fix.go:229] Guest: 2024-07-29 13:38:35.546644144 +0000 UTC Remote: 2024-07-29 13:38:35.473518121 +0000 UTC m=+357.868969453 (delta=73.126023ms)
	I0729 13:38:35.577919  300705 fix.go:200] guest clock delta is within tolerance: 73.126023ms
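The fix.go lines above compare the guest clock against the host-side timestamp and accept a 73ms delta. A minimal Go sketch of that kind of skew check, with a 1-second tolerance assumed purely for illustration (the log does not state the actual threshold):

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDelta returns the absolute difference between the guest and host
    // clocks and whether it falls inside the allowed tolerance.
    func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d
    	}
    	return d, d <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(73 * time.Millisecond) // delta comparable to the one logged above
    	d, ok := clockDelta(guest, host, time.Second) // 1s tolerance is an assumption
    	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
    }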
	I0729 13:38:35.577926  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 18.971820448s
	I0729 13:38:35.577950  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.578260  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:35.581109  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581474  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.581507  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581707  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582287  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582451  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582562  300705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:35.582616  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.582645  300705 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:35.582673  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.585527  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585555  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585989  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586021  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586062  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586084  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586171  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586351  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586360  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586573  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586582  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586795  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586838  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.586942  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.686359  300705 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:35.692726  300705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:35.838487  300705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:35.844313  300705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:35.844416  300705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:35.861079  300705 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:35.861103  300705 start.go:495] detecting cgroup driver to use...
	I0729 13:38:35.861178  300705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:35.880678  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:35.897996  300705 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:35.898070  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:35.915337  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:35.930990  300705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:36.039923  300705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:36.198255  300705 docker.go:233] disabling docker service ...
	I0729 13:38:36.198340  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:36.213373  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:36.227364  300705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:36.351279  300705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:36.468325  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:36.483692  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:36.503872  300705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:38:36.503945  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.515397  300705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:36.515502  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.527170  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.538668  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.550013  300705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:36.561402  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.573747  300705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.594158  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
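The grep/sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: drop any stale net.ipv4.ip_unprivileged_port_start entry, create an empty default_sysctls list if none exists, then register the sysctl inside it. A rough, in-memory Go equivalent of that idempotent edit, illustrative only; the real work happens through the shell commands logged above:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureUnprivilegedPorts mirrors the three shell steps above on an
    // in-memory copy of a crio drop-in file.
    func ensureUnprivilegedPorts(conf string) string {
    	lines := strings.Split(conf, "\n")
    	stale := regexp.MustCompile(`^\s*"net\.ipv4\.ip_unprivileged_port_start=.*"`)
    	out := make([]string, 0, len(lines))
    	hasSysctls := false
    	for _, l := range lines {
    		if stale.MatchString(l) {
    			continue // step 1: delete any existing entry
    		}
    		if strings.HasPrefix(strings.TrimSpace(l), "default_sysctls") {
    			hasSysctls = true
    		}
    		out = append(out, l)
    	}
    	if !hasSysctls {
    		// step 2: add an empty default_sysctls list after conmon_cgroup
    		withList := make([]string, 0, len(out)+2)
    		for _, l := range out {
    			withList = append(withList, l)
    			if strings.HasPrefix(strings.TrimSpace(l), "conmon_cgroup") {
    				withList = append(withList, "default_sysctls = [", "]")
    			}
    		}
    		out = withList
    	}
    	// step 3: put the sysctl right after the opening bracket
    	final := make([]string, 0, len(out)+1)
    	for _, l := range out {
    		final = append(final, l)
    		if strings.HasPrefix(strings.TrimSpace(l), "default_sysctls") {
    			final = append(final, `  "net.ipv4.ip_unprivileged_port_start=0",`)
    		}
    	}
    	return strings.Join(final, "\n")
    }

    func main() {
    	conf := "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"\n"
    	fmt.Println(ensureUnprivilegedPorts(conf))
    }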
	I0729 13:38:36.606047  300705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:36.616858  300705 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:36.616961  300705 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:36.633281  300705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:36.644423  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:36.779934  300705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:36.924394  300705 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:36.924483  300705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:36.929889  300705 start.go:563] Will wait 60s for crictl version
	I0729 13:38:36.929935  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:38:36.933671  300705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:36.973428  300705 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:36.973506  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.002245  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.034982  300705 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:38:37.036162  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:37.039092  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:37.039533  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039697  300705 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:37.044028  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:37.057278  300705 kubeadm.go:883] updating cluster {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:37.057398  300705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:38:37.057504  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:37.096111  300705 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:38:37.096205  300705 ssh_runner.go:195] Run: which lz4
	I0729 13:38:37.100600  300705 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:38:37.104942  300705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:37.104974  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:38:35.325849  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:37.326770  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.362042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.862022  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.361208  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.862020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.362115  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.861360  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.362077  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.861478  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.361278  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.861920  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.167072  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:40.667067  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:38.548671  300705 crio.go:462] duration metric: took 1.448103052s to copy over tarball
	I0729 13:38:38.548764  300705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:40.801144  300705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.252337742s)
	I0729 13:38:40.801177  300705 crio.go:469] duration metric: took 2.252468783s to extract the tarball
	I0729 13:38:40.801185  300705 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:40.840132  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:40.887424  300705 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:38:40.887447  300705 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:38:40.887456  300705 kubeadm.go:934] updating node { 192.168.72.207 8443 v1.30.3 crio true true} ...
	I0729 13:38:40.887583  300705 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-135920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:40.887661  300705 ssh_runner.go:195] Run: crio config
	I0729 13:38:40.943732  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:40.943759  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:40.943771  300705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:40.943801  300705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.207 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-135920 NodeName:embed-certs-135920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:38:40.943967  300705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-135920"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:40.944048  300705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:38:40.954284  300705 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:40.954354  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:40.963877  300705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 13:38:40.981828  300705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:40.999273  300705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 13:38:41.016590  300705 ssh_runner.go:195] Run: grep 192.168.72.207	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:41.020149  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:41.031970  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:41.163779  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:41.181723  300705 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920 for IP: 192.168.72.207
	I0729 13:38:41.181746  300705 certs.go:194] generating shared ca certs ...
	I0729 13:38:41.181764  300705 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:41.181989  300705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:41.182053  300705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:41.182067  300705 certs.go:256] generating profile certs ...
	I0729 13:38:41.182191  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/client.key
	I0729 13:38:41.182257  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key.45ab1b35
	I0729 13:38:41.182306  300705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key
	I0729 13:38:41.182454  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:41.182501  300705 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:41.182517  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:41.182553  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:41.182583  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:41.182607  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:41.182647  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:41.183522  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:41.239170  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:41.278086  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:41.318584  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:41.351639  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 13:38:41.389242  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:38:41.414897  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:41.439178  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:38:41.464278  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:41.488391  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:41.515271  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:41.539904  300705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:41.557036  300705 ssh_runner.go:195] Run: openssl version
	I0729 13:38:41.562935  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:41.580782  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585603  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585670  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.591504  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:41.602129  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:41.612441  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616813  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616866  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.622328  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:41.633108  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:41.643897  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648369  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648415  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.654085  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:41.665037  300705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:41.670067  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:41.676340  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:41.682386  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:41.688809  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:41.694957  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:41.700469  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
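Each openssl x509 -checkend 86400 run above asks whether a control-plane certificate is still valid 24 hours from now. A hedged Go sketch of the same check done in-process; the certificate path in main is only a placeholder taken from the paths checked above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // will expire before now+window, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// placeholder path; the log checks several certs under /var/lib/minikube/certs
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }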
	I0729 13:38:41.706471  300705 kubeadm.go:392] StartCluster: {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:41.706561  300705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:41.706617  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.746623  300705 cri.go:89] found id: ""
	I0729 13:38:41.746703  300705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:41.757101  300705 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:41.757121  300705 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:41.757174  300705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:41.766817  300705 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:41.767837  300705 kubeconfig.go:125] found "embed-certs-135920" server: "https://192.168.72.207:8443"
	I0729 13:38:41.770191  300705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:41.779930  300705 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.207
	I0729 13:38:41.779961  300705 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:41.779976  300705 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:41.780030  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.816273  300705 cri.go:89] found id: ""
	I0729 13:38:41.816350  300705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:41.836512  300705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:41.847230  300705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:41.847249  300705 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:41.847297  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:41.856215  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:41.856262  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:41.866646  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:41.876656  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:41.876723  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:41.886810  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.895693  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:41.895755  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.904774  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:41.915232  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:41.915301  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:41.924961  300705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:41.937051  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:42.059359  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:39.329415  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.826891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.361613  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.861155  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.361524  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.862047  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.361778  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.862055  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.861737  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.361194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.862019  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.326814  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:45.666203  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:42.934386  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.142119  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.221754  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.346345  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:43.346451  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.847275  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.347551  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.391680  300705 api_server.go:72] duration metric: took 1.045336573s to wait for apiserver process to appear ...
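The wait above simply re-runs sudo pgrep -xnf kube-apiserver.*minikube.* about every 500ms until the process appears. A small, illustrative Go version of that poll; the one-minute timeout is an assumption, not a value taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline passes.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 as soon as at least one process matches the pattern
    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("no process matching %q after %v", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver process is up")
    }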
	I0729 13:38:44.391709  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:44.391735  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:44.392354  300705 api_server.go:269] stopped: https://192.168.72.207:8443/healthz: Get "https://192.168.72.207:8443/healthz": dial tcp 192.168.72.207:8443: connect: connection refused
	I0729 13:38:44.892773  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.149059  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.149101  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.149128  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.161645  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.161672  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.391878  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.396499  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.396527  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:47.892015  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.897406  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.897436  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:48.391867  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:48.395941  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:38:48.401926  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:48.401951  300705 api_server.go:131] duration metric: took 4.010234721s to wait for apiserver health ...
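The polling above tolerates 403 responses (anonymous access to /healthz is forbidden until the RBAC bootstrap poststarthook finishes) and 500 responses (poststarthooks still pending), and only stops once /healthz returns 200. A minimal Go sketch of that retry pattern, assuming a plain HTTP probe rather than minikube's actual api_server.go:

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses. 403 and 500 responses are treated as "not ready yet",
// mirroring the retries visible in the log above. Hypothetical sketch, not the
// minikube implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the probe is anonymous and skips cert verification,
		// since the apiserver serves its own CA-signed cert during bootstrap.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
			// 403 (RBAC not bootstrapped) and 500 (poststarthooks pending)
			// are expected while the control plane is still coming up.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.207:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}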
	I0729 13:38:48.401962  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:48.401970  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:48.403912  300705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:44.073092  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:46.327011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:48.405332  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:48.416550  300705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
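The two steps above create /etc/cni/net.d and copy a 496-byte bridge conflist onto the node. The sketch below writes an illustrative bridge + portmap chain to the same path; the exact contents of minikube's 1-k8s.conflist are not shown in the log, so the JSON here is an assumption, not the real file.

// Writes an illustrative bridge CNI configuration to /etc/cni/net.d.
// The embedded JSON is a generic bridge+portmap chain and is NOT the exact
// 1-k8s.conflist that minikube copies (that content is not in the log).
package main

import (
	"log"
	"os"
	"path/filepath"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d" // "sudo mkdir -p /etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}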
	I0729 13:38:48.439881  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:48.452435  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:48.452477  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:48.452527  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:48.452544  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:48.452556  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:48.452575  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:48.452584  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:48.452594  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:48.452604  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:48.452617  300705 system_pods.go:74] duration metric: took 12.710662ms to wait for pod list to return data ...
	I0729 13:38:48.452629  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:48.455453  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:48.455484  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:48.455497  300705 node_conditions.go:105] duration metric: took 2.858433ms to run NodePressure ...
	I0729 13:38:48.455518  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:48.791507  300705 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796191  300705 kubeadm.go:739] kubelet initialised
	I0729 13:38:48.796213  300705 kubeadm.go:740] duration metric: took 4.674843ms waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796222  300705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:48.802395  300705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.807224  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807247  300705 pod_ready.go:81] duration metric: took 4.825485ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.807263  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807269  300705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.812485  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812516  300705 pod_ready.go:81] duration metric: took 5.235923ms for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.812529  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812536  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.817345  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817374  300705 pod_ready.go:81] duration metric: took 4.827847ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.817383  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817390  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.843709  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843754  300705 pod_ready.go:81] duration metric: took 26.35618ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.843775  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843783  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.243226  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243257  300705 pod_ready.go:81] duration metric: took 399.464753ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.243269  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243278  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.643370  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643399  300705 pod_ready.go:81] duration metric: took 400.112533ms for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.643410  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643416  300705 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:50.044089  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044119  300705 pod_ready.go:81] duration metric: took 400.694081ms for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:50.044128  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044135  300705 pod_ready.go:38] duration metric: took 1.247904039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
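Each system-critical pod above is skipped rather than waited on because the hosting node still reports Ready=False. A condensed client-go sketch of that kind of check (hypothetical helper, not minikube's pod_ready.go):

// podReady reports whether a kube-system pod has the Ready condition set to
// True, first checking whether its node is Ready; the log above shows the wait
// being skipped while the node itself is still NotReady.
// Hypothetical client-go sketch.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			// Equivalent to the "(skipping!)" branch in the log above.
			return false, fmt.Errorf("node %q has status Ready=%s", node.Name, c.Status)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19341-233093/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "coredns-7db6d8ff4d-rgh5d")
	fmt.Println(ready, err)
}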
	I0729 13:38:50.044153  300705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:50.055730  300705 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:50.055755  300705 kubeadm.go:597] duration metric: took 8.298625813s to restartPrimaryControlPlane
	I0729 13:38:50.055765  300705 kubeadm.go:394] duration metric: took 8.349303256s to StartCluster
	I0729 13:38:50.055785  300705 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.055869  300705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:50.057734  300705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.058013  300705 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:50.058092  300705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:50.058165  300705 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-135920"
	I0729 13:38:50.058216  300705 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-135920"
	W0729 13:38:50.058230  300705 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:50.058217  300705 addons.go:69] Setting default-storageclass=true in profile "embed-certs-135920"
	I0729 13:38:50.058244  300705 addons.go:69] Setting metrics-server=true in profile "embed-certs-135920"
	I0729 13:38:50.058268  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058270  300705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-135920"
	I0729 13:38:50.058297  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:50.058305  300705 addons.go:234] Setting addon metrics-server=true in "embed-certs-135920"
	W0729 13:38:50.058350  300705 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:50.058416  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058719  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058746  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058763  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058766  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058732  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058835  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.061029  300705 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:50.062610  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:50.074642  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0729 13:38:50.074661  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0729 13:38:50.075119  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0729 13:38:50.075217  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075310  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075570  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075833  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.075856  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076049  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076066  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076273  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076367  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076393  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076434  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076620  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.076863  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.076912  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.076959  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.077488  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.077519  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.080392  300705 addons.go:234] Setting addon default-storageclass=true in "embed-certs-135920"
	W0729 13:38:50.080419  300705 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:50.080458  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.080872  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.080914  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.093352  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0729 13:38:50.093981  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.094704  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.094742  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.095201  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.095452  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.095863  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0729 13:38:50.096287  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096506  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0729 13:38:50.096945  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096974  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.096991  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.097343  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.097408  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.097508  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.097529  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.099585  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.099600  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.099936  300705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:50.100730  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.100765  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.101377  300705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.101399  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:50.101424  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.101563  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.103218  300705 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:46.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.862046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.362045  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.361183  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.862026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.361204  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.861490  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.361635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.861519  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
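Process 301425 (the old-k8s-version profile still coming up) polls for a running kube-apiserver roughly every 500ms with pgrep. A minimal sketch of that wait, using the same command as the log but run locally instead of through minikube's ssh_runner:

// waitForAPIServerProcess polls `sudo pgrep -xnf kube-apiserver.*minikube.*`
// until a matching process shows up or the timeout elapses, mirroring the
// ~500ms cadence visible above. Hypothetical sketch.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // PID of the newest match
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	fmt.Println(pid, err)
}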
	I0729 13:38:50.104927  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:50.104948  300705 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:50.104971  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.105309  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106036  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.106207  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106369  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.106615  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.106716  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.106817  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.108316  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.108859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108908  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.109081  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.109240  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.109354  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.119251  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0729 13:38:50.119703  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.120206  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.120235  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.120620  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.120813  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.122685  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.122898  300705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.122910  300705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:50.122923  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.125412  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.125875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.125914  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.126140  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.126321  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.126448  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.126566  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.254664  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:50.276352  300705 node_ready.go:35] waiting up to 6m0s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:50.328315  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.412968  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.459653  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:50.459697  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:50.513203  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:50.513237  300705 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:50.576439  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.576469  300705 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:50.611994  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.701214  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701569  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.701636  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701647  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701657  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701663  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701909  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701936  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701939  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.707113  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.707130  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.707390  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.707407  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.707407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.625719  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212712139s)
	I0729 13:38:51.625766  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.625778  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626066  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.626109  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626117  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.626135  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.626143  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626412  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626430  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662030  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.049982518s)
	I0729 13:38:51.662094  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662110  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.662391  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.662759  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.662781  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662798  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.663076  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.663117  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.663126  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.663138  300705 addons.go:475] Verifying addon metrics-server=true in "embed-certs-135920"
	I0729 13:38:51.666005  300705 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 13:38:47.666568  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.167349  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.667365  300705 addons.go:510] duration metric: took 1.609276005s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
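Enabling an addon here amounts to scp-ing its manifests into /etc/kubernetes/addons/ and applying them on the node with the bundled kubectl, as the Run lines above show. A hedged sketch of that apply step (local exec instead of minikube's ssh_runner):

// applyAddonManifests applies addon manifests with the node-local kubectl,
// the same shape of command as the Run lines above. Sketch only; the real
// flow copies the manifests to the VM over SSH first.
package main

import (
	"log"
	"os"
	"os/exec"
)

func applyAddonManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// sudo accepts leading VAR=value assignments, matching the logged command.
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		log.Fatal(err)
	}
}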
	I0729 13:38:52.280219  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.826113  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.826826  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:53.327720  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.861510  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.362026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.861182  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.361850  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.861931  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.362035  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.861192  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.361173  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.862018  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.665875  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.666184  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.779805  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:56.780550  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:55.826349  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:58.326186  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:56.361740  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.862033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.362084  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.861406  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.861194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.361788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.861962  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.362043  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.862000  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.166551  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:59.167246  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.666773  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:57.780677  300705 node_ready.go:49] node "embed-certs-135920" has status "Ready":"True"
	I0729 13:38:57.780700  300705 node_ready.go:38] duration metric: took 7.504317897s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:57.780709  300705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:57.786299  300705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791107  300705 pod_ready.go:92] pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:57.791132  300705 pod_ready.go:81] duration metric: took 4.805712ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791143  300705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:59.806437  300705 pod_ready.go:102] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:00.296725  300705 pod_ready.go:92] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.296772  300705 pod_ready.go:81] duration metric: took 2.505622037s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.296782  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302450  300705 pod_ready.go:92] pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.302471  300705 pod_ready.go:81] duration metric: took 5.680644ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302482  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306734  300705 pod_ready.go:92] pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.306753  300705 pod_ready.go:81] duration metric: took 4.264085ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306762  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311745  300705 pod_ready.go:92] pod "kube-proxy-sn8bc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.311763  300705 pod_ready.go:81] duration metric: took 4.990061ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311773  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817465  300705 pod_ready.go:92] pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:01.817489  300705 pod_ready.go:81] duration metric: took 1.50570948s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817499  300705 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.825911  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.325485  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.362213  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.861107  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.361767  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.861151  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.361607  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.862013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.362032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.861858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.361611  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.862037  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.667047  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.166825  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.826817  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.326374  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:05.325891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:07.326167  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.362002  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.861635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.361659  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.862061  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.862083  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.361356  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.861763  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.361420  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.861822  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.666165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:10.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:08.824692  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.324207  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:09.326609  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.826082  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.362046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.861909  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.861834  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.361461  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.861666  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.861830  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.361141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.862003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.167800  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.665790  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:13.325286  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.826111  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:14.327217  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.826625  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.361731  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.862014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.361702  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.862141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.361808  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.361104  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.861123  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.361276  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.861176  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.666780  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.165629  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:18.328096  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.824426  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:19.326628  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.825705  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.362052  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.861150  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.361802  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.861996  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.362106  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.861135  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.361998  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.862048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.361848  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.861813  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.666434  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.666549  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:22.824988  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.825210  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.825579  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:23.826380  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:25.826544  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:27.826988  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:26.861651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:26.861733  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:26.904275  301425 cri.go:89] found id: ""
	I0729 13:39:26.904307  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.904315  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:26.904322  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:26.904387  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:26.946925  301425 cri.go:89] found id: ""
	I0729 13:39:26.946954  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.946966  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:26.946973  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:26.947036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:26.979236  301425 cri.go:89] found id: ""
	I0729 13:39:26.979267  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.979276  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:26.979282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:26.979330  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:27.022185  301425 cri.go:89] found id: ""
	I0729 13:39:27.022212  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.022220  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:27.022226  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:27.022277  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:27.055228  301425 cri.go:89] found id: ""
	I0729 13:39:27.055256  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.055266  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:27.055274  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:27.055335  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:27.088885  301425 cri.go:89] found id: ""
	I0729 13:39:27.088918  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.088926  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:27.088933  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:27.088986  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:27.123861  301425 cri.go:89] found id: ""
	I0729 13:39:27.123893  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.123902  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:27.123915  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:27.123967  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:27.157921  301425 cri.go:89] found id: ""
	I0729 13:39:27.157956  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.157964  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:27.157988  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:27.158003  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.222447  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:27.222489  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:27.265646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:27.265680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:27.317344  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:27.317388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:27.333664  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:27.333689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:27.460502  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:29.960703  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:29.974159  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:29.974235  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:30.009701  301425 cri.go:89] found id: ""
	I0729 13:39:30.009740  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.009753  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:30.009761  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:30.009822  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:30.045806  301425 cri.go:89] found id: ""
	I0729 13:39:30.045841  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.045853  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:30.045860  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:30.045924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:30.078709  301425 cri.go:89] found id: ""
	I0729 13:39:30.078738  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.078747  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:30.078753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:30.078808  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:30.112884  301425 cri.go:89] found id: ""
	I0729 13:39:30.112920  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.112932  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:30.112943  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:30.113012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:30.148160  301425 cri.go:89] found id: ""
	I0729 13:39:30.148196  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.148208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:30.148217  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:30.148285  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:30.186939  301425 cri.go:89] found id: ""
	I0729 13:39:30.186967  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.186975  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:30.186981  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:30.187039  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:30.241888  301425 cri.go:89] found id: ""
	I0729 13:39:30.241915  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.241926  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:30.241934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:30.242009  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:30.281482  301425 cri.go:89] found id: ""
	I0729 13:39:30.281510  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.281518  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:30.281527  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:30.281540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:30.321688  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:30.321730  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:30.378464  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:30.378508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:30.394109  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:30.394150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:30.474077  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:30.474101  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:30.474118  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.166322  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.166623  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.666142  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.323534  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.324750  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:30.327219  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:32.826011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.046016  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:33.059705  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:33.059795  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:33.096521  301425 cri.go:89] found id: ""
	I0729 13:39:33.096549  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.096557  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:33.096564  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:33.096621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:33.131262  301425 cri.go:89] found id: ""
	I0729 13:39:33.131295  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.131307  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:33.131314  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:33.131378  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:33.168889  301425 cri.go:89] found id: ""
	I0729 13:39:33.168915  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.168925  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:33.168932  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:33.168994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:33.205513  301425 cri.go:89] found id: ""
	I0729 13:39:33.205547  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.205558  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:33.205567  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:33.205644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:33.247051  301425 cri.go:89] found id: ""
	I0729 13:39:33.247079  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.247087  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:33.247093  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:33.247149  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:33.279541  301425 cri.go:89] found id: ""
	I0729 13:39:33.279575  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.279587  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:33.279596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:33.279659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:33.314000  301425 cri.go:89] found id: ""
	I0729 13:39:33.314034  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.314046  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:33.314054  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:33.314117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:33.351363  301425 cri.go:89] found id: ""
	I0729 13:39:33.351390  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.351401  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:33.351412  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:33.351437  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:33.413509  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:33.413547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:33.428128  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:33.428165  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:33.495430  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:33.495461  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:33.495478  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:33.574060  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:33.574098  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:34.166133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.167919  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.823668  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.824684  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.326216  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826516  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.113561  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:36.126899  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:36.126965  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:36.163363  301425 cri.go:89] found id: ""
	I0729 13:39:36.163396  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.163407  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:36.163414  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:36.163473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:36.205215  301425 cri.go:89] found id: ""
	I0729 13:39:36.205243  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.205259  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:36.205267  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:36.205331  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:36.243166  301425 cri.go:89] found id: ""
	I0729 13:39:36.243220  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.243231  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:36.243239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:36.243295  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:36.280804  301425 cri.go:89] found id: ""
	I0729 13:39:36.280836  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.280845  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:36.280852  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:36.280903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:36.317291  301425 cri.go:89] found id: ""
	I0729 13:39:36.317320  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.317330  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:36.317337  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:36.317399  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:36.358111  301425 cri.go:89] found id: ""
	I0729 13:39:36.358145  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.358156  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:36.358164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:36.358229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:36.399407  301425 cri.go:89] found id: ""
	I0729 13:39:36.399440  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.399451  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:36.399459  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:36.399525  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:36.437876  301425 cri.go:89] found id: ""
	I0729 13:39:36.437904  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.437914  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:36.437926  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:36.437942  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:36.514464  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:36.514493  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:36.514511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:36.592036  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:36.592083  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.647650  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:36.647691  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:36.706890  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:36.706935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.226070  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:39.239313  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:39.239373  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:39.274158  301425 cri.go:89] found id: ""
	I0729 13:39:39.274191  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.274202  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:39.274210  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:39.274286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:39.308448  301425 cri.go:89] found id: ""
	I0729 13:39:39.308484  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.308492  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:39.308499  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:39.308563  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:39.347745  301425 cri.go:89] found id: ""
	I0729 13:39:39.347782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.347791  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:39.347798  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:39.347856  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:39.380649  301425 cri.go:89] found id: ""
	I0729 13:39:39.380679  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.380688  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:39.380696  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:39.380767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:39.415076  301425 cri.go:89] found id: ""
	I0729 13:39:39.415107  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.415115  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:39.415120  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:39.415170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:39.450749  301425 cri.go:89] found id: ""
	I0729 13:39:39.450782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.450793  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:39.450801  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:39.450864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:39.482148  301425 cri.go:89] found id: ""
	I0729 13:39:39.482175  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.482184  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:39.482190  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:39.482239  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:39.518558  301425 cri.go:89] found id: ""
	I0729 13:39:39.518588  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.518597  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:39.518608  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:39.518622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:39.555753  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:39.555786  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:39.606627  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:39.606661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.620359  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:39.620388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:39.690685  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:39.690711  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:39.690728  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:38.665446  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.666445  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826801  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.325166  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:39.827390  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.326038  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.271925  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:42.284365  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:42.284447  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:42.318966  301425 cri.go:89] found id: ""
	I0729 13:39:42.318998  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.319020  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:42.319028  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:42.319111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:42.354811  301425 cri.go:89] found id: ""
	I0729 13:39:42.354840  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.354854  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:42.354862  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:42.354917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:42.402524  301425 cri.go:89] found id: ""
	I0729 13:39:42.402557  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.402569  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:42.402577  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:42.402643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:42.460954  301425 cri.go:89] found id: ""
	I0729 13:39:42.460984  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.461001  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:42.461010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:42.461063  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:42.516849  301425 cri.go:89] found id: ""
	I0729 13:39:42.516880  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.516890  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:42.516898  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:42.516963  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:42.560289  301425 cri.go:89] found id: ""
	I0729 13:39:42.560316  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.560325  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:42.560332  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:42.560397  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:42.597798  301425 cri.go:89] found id: ""
	I0729 13:39:42.597829  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.597839  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:42.597847  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:42.597912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:42.633015  301425 cri.go:89] found id: ""
	I0729 13:39:42.633043  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.633059  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:42.633068  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:42.633080  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:42.711103  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:42.711126  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:42.711141  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.787459  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:42.787499  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:42.828965  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:42.829002  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:42.881702  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:42.881740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:45.396462  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:45.410766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:45.410859  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:45.445886  301425 cri.go:89] found id: ""
	I0729 13:39:45.445931  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.445943  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:45.445960  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:45.446023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:45.484293  301425 cri.go:89] found id: ""
	I0729 13:39:45.484326  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.484338  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:45.484346  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:45.484410  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:45.520209  301425 cri.go:89] found id: ""
	I0729 13:39:45.520237  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.520246  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:45.520252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:45.520300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:45.555671  301425 cri.go:89] found id: ""
	I0729 13:39:45.555702  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.555711  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:45.555717  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:45.555767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:45.594578  301425 cri.go:89] found id: ""
	I0729 13:39:45.594609  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.594618  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:45.594624  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:45.594685  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:45.631777  301425 cri.go:89] found id: ""
	I0729 13:39:45.631805  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.631817  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:45.631825  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:45.631881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:45.667163  301425 cri.go:89] found id: ""
	I0729 13:39:45.667189  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.667197  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:45.667203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:45.667258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:45.703393  301425 cri.go:89] found id: ""
	I0729 13:39:45.703434  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.703443  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:45.703454  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:45.703488  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:45.774424  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:45.774452  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:45.774472  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:45.857529  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:45.857586  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:45.899737  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:45.899775  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:45.952640  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:45.952685  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:42.666728  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.165982  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.825543  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.323544  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:47.323595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:44.825237  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:46.825276  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.467705  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:48.482292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:48.482380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:48.520146  301425 cri.go:89] found id: ""
	I0729 13:39:48.520181  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.520195  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:48.520204  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:48.520282  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:48.552623  301425 cri.go:89] found id: ""
	I0729 13:39:48.552654  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.552665  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:48.552672  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:48.552734  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:48.587254  301425 cri.go:89] found id: ""
	I0729 13:39:48.587290  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.587303  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:48.587309  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:48.587368  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:48.621045  301425 cri.go:89] found id: ""
	I0729 13:39:48.621076  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.621088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:48.621096  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:48.621160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:48.654117  301425 cri.go:89] found id: ""
	I0729 13:39:48.654151  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.654163  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:48.654171  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:48.654236  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:48.693108  301425 cri.go:89] found id: ""
	I0729 13:39:48.693149  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.693166  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:48.693173  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:48.693225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:48.733000  301425 cri.go:89] found id: ""
	I0729 13:39:48.733025  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.733033  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:48.733039  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:48.733088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:48.773761  301425 cri.go:89] found id: ""
	I0729 13:39:48.773789  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.773798  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:48.773807  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:48.773822  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:48.826655  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:48.826683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.840335  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:48.840364  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:48.913727  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:48.913754  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:48.913774  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:48.990196  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:48.990235  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:47.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.167105  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.667165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.324027  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.324146  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.825859  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.326299  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.533333  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:51.547115  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:51.547175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:51.583247  301425 cri.go:89] found id: ""
	I0729 13:39:51.583284  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.583292  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:51.583297  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:51.583350  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:51.618925  301425 cri.go:89] found id: ""
	I0729 13:39:51.618958  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.618969  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:51.618977  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:51.619036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:51.657099  301425 cri.go:89] found id: ""
	I0729 13:39:51.657132  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.657144  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:51.657151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:51.657210  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:51.695413  301425 cri.go:89] found id: ""
	I0729 13:39:51.695459  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.695471  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:51.695480  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:51.695553  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:51.731153  301425 cri.go:89] found id: ""
	I0729 13:39:51.731186  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.731198  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:51.731206  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:51.731271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:51.765662  301425 cri.go:89] found id: ""
	I0729 13:39:51.765716  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.765730  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:51.765740  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:51.765807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:51.800442  301425 cri.go:89] found id: ""
	I0729 13:39:51.800480  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.800491  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:51.800500  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:51.800562  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:51.844516  301425 cri.go:89] found id: ""
	I0729 13:39:51.844542  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.844551  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:51.844562  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:51.844580  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:51.896139  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:51.896176  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:51.910479  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:51.910511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:51.980025  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:51.980052  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:51.980071  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:52.054674  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:52.054717  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.596468  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:54.612233  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:54.612344  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:54.653506  301425 cri.go:89] found id: ""
	I0729 13:39:54.653547  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.653558  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:54.653565  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:54.653624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:54.696964  301425 cri.go:89] found id: ""
	I0729 13:39:54.697002  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.697015  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:54.697023  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:54.697088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:54.731165  301425 cri.go:89] found id: ""
	I0729 13:39:54.731196  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.731207  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:54.731214  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:54.731279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:54.774397  301425 cri.go:89] found id: ""
	I0729 13:39:54.774426  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.774437  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:54.774444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:54.774506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:54.813365  301425 cri.go:89] found id: ""
	I0729 13:39:54.813396  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.813408  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:54.813414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:54.813480  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:54.849936  301425 cri.go:89] found id: ""
	I0729 13:39:54.849962  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.849970  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:54.849980  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:54.850042  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:54.883979  301425 cri.go:89] found id: ""
	I0729 13:39:54.884007  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.884015  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:54.884021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:54.884087  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:54.919754  301425 cri.go:89] found id: ""
	I0729 13:39:54.919779  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.919787  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:54.919796  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:54.919817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:54.973082  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:54.973117  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:54.986534  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:54.986571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:55.055473  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:55.055499  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:55.055514  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:55.138278  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:55.138322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.166585  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:56.166714  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.824525  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.824559  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.825238  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.826464  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.826664  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.683818  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:57.698992  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:57.699070  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:57.742071  301425 cri.go:89] found id: ""
	I0729 13:39:57.742103  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.742113  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:57.742121  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:57.742185  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:57.777871  301425 cri.go:89] found id: ""
	I0729 13:39:57.777902  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.777911  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:57.777918  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:57.777975  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:57.817767  301425 cri.go:89] found id: ""
	I0729 13:39:57.817798  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.817809  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:57.817817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:57.817889  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:57.855608  301425 cri.go:89] found id: ""
	I0729 13:39:57.855634  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.855644  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:57.855651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:57.855714  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:57.891219  301425 cri.go:89] found id: ""
	I0729 13:39:57.891248  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.891258  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:57.891266  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:57.891336  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:57.926000  301425 cri.go:89] found id: ""
	I0729 13:39:57.926034  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.926045  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:57.926053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:57.926116  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:57.964935  301425 cri.go:89] found id: ""
	I0729 13:39:57.964962  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.964978  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:57.964985  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:57.965051  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:58.001363  301425 cri.go:89] found id: ""
	I0729 13:39:58.001393  301425 logs.go:276] 0 containers: []
	W0729 13:39:58.001405  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:58.001417  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:58.001434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:58.057551  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:58.057598  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:58.072162  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:58.072200  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:58.140533  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:58.140565  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:58.140582  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:58.227285  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:58.227330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:00.769075  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:00.783394  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:00.783471  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:00.831260  301425 cri.go:89] found id: ""
	I0729 13:40:00.831291  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.831301  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:00.831309  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:00.831370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:00.870017  301425 cri.go:89] found id: ""
	I0729 13:40:00.870045  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.870057  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:00.870065  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:00.870127  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:00.904691  301425 cri.go:89] found id: ""
	I0729 13:40:00.904728  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.904740  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:00.904748  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:00.904828  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:00.937221  301425 cri.go:89] found id: ""
	I0729 13:40:00.937249  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.937259  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:00.937265  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:00.937329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:58.167355  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.666837  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.824755  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.324616  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.325368  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.325689  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.326062  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.977961  301425 cri.go:89] found id: ""
	I0729 13:40:00.977991  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.978002  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:00.978010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:00.978104  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:01.014239  301425 cri.go:89] found id: ""
	I0729 13:40:01.014271  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.014283  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:01.014292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:01.014362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:01.050583  301425 cri.go:89] found id: ""
	I0729 13:40:01.050615  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.050630  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:01.050637  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:01.050696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:01.091599  301425 cri.go:89] found id: ""
	I0729 13:40:01.091624  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.091634  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:01.091643  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:01.091661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:01.146404  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:01.146445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:01.160327  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:01.160358  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:01.237120  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:01.237147  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:01.237162  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:01.321539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:01.321590  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:03.865268  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:03.879648  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:03.879724  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:03.915303  301425 cri.go:89] found id: ""
	I0729 13:40:03.915329  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.915338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:03.915344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:03.915403  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:03.951982  301425 cri.go:89] found id: ""
	I0729 13:40:03.952014  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.952023  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:03.952032  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:03.952099  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:03.989751  301425 cri.go:89] found id: ""
	I0729 13:40:03.989785  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.989796  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:03.989804  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:03.989870  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:04.026934  301425 cri.go:89] found id: ""
	I0729 13:40:04.026975  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.026988  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:04.026996  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:04.027059  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:04.064135  301425 cri.go:89] found id: ""
	I0729 13:40:04.064165  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.064175  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:04.064187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:04.064256  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:04.103080  301425 cri.go:89] found id: ""
	I0729 13:40:04.103108  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.103117  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:04.103123  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:04.103172  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:04.143370  301425 cri.go:89] found id: ""
	I0729 13:40:04.143403  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.143414  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:04.143422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:04.143491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:04.179251  301425 cri.go:89] found id: ""
	I0729 13:40:04.179286  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.179298  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:04.179311  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:04.179330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:04.261058  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:04.261089  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:04.261111  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:04.342897  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:04.342935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:04.391504  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:04.391532  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:04.443064  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:04.443106  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:03.166195  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:05.166660  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.824882  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:07.324346  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.326236  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.825685  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.959346  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:06.974377  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:06.974444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:07.007797  301425 cri.go:89] found id: ""
	I0729 13:40:07.007834  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.007847  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:07.007856  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:07.007924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:07.042707  301425 cri.go:89] found id: ""
	I0729 13:40:07.042741  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.042749  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:07.042755  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:07.042807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:07.080150  301425 cri.go:89] found id: ""
	I0729 13:40:07.080185  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.080196  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:07.080203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:07.080268  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:07.115740  301425 cri.go:89] found id: ""
	I0729 13:40:07.115777  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.115788  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:07.115796  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:07.115888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:07.154110  301425 cri.go:89] found id: ""
	I0729 13:40:07.154141  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.154151  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:07.154158  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:07.154225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:07.190819  301425 cri.go:89] found id: ""
	I0729 13:40:07.190850  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.190858  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:07.190865  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:07.190917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:07.231530  301425 cri.go:89] found id: ""
	I0729 13:40:07.231560  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.231571  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:07.231579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:07.231643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:07.272211  301425 cri.go:89] found id: ""
	I0729 13:40:07.272240  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.272247  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:07.272257  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:07.272269  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.326673  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:07.326704  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:07.341255  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:07.341282  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:07.409850  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:07.409878  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:07.409895  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:07.493105  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:07.493169  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.033906  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:10.047938  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:10.048018  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:10.084224  301425 cri.go:89] found id: ""
	I0729 13:40:10.084251  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.084259  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:10.084265  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:10.084316  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:10.120362  301425 cri.go:89] found id: ""
	I0729 13:40:10.120398  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.120409  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:10.120417  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:10.120484  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:10.154128  301425 cri.go:89] found id: ""
	I0729 13:40:10.154160  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.154170  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:10.154178  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:10.154243  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:10.189539  301425 cri.go:89] found id: ""
	I0729 13:40:10.189574  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.189588  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:10.189596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:10.189661  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:10.228821  301425 cri.go:89] found id: ""
	I0729 13:40:10.228855  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.228867  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:10.228875  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:10.228950  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:10.274726  301425 cri.go:89] found id: ""
	I0729 13:40:10.274758  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.274769  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:10.274776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:10.274845  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:10.308910  301425 cri.go:89] found id: ""
	I0729 13:40:10.308945  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.308956  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:10.308964  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:10.309030  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:10.346008  301425 cri.go:89] found id: ""
	I0729 13:40:10.346044  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.346056  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:10.346069  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:10.346091  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:10.360541  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:10.360581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:10.433763  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:10.433788  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:10.433802  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:10.520366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:10.520418  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.561482  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:10.561512  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.668816  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:10.166833  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:09.823429  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.824033  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:08.826798  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.326762  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.327128  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.114858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:13.128348  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:13.128425  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:13.165329  301425 cri.go:89] found id: ""
	I0729 13:40:13.165359  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.165370  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:13.165377  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:13.165441  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:13.200104  301425 cri.go:89] found id: ""
	I0729 13:40:13.200135  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.200148  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:13.200155  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:13.200224  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:13.238632  301425 cri.go:89] found id: ""
	I0729 13:40:13.238680  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.238688  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:13.238694  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:13.238748  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:13.270859  301425 cri.go:89] found id: ""
	I0729 13:40:13.270892  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.270901  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:13.270907  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:13.270976  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:13.308346  301425 cri.go:89] found id: ""
	I0729 13:40:13.308378  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.308386  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:13.308392  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:13.308444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:13.346286  301425 cri.go:89] found id: ""
	I0729 13:40:13.346319  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.346331  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:13.346339  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:13.346412  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:13.383699  301425 cri.go:89] found id: ""
	I0729 13:40:13.383736  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.383769  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:13.383791  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:13.383850  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:13.419958  301425 cri.go:89] found id: ""
	I0729 13:40:13.420045  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.420058  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:13.420071  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:13.420094  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.473984  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:13.474028  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:13.488376  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:13.488410  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:13.559515  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:13.559543  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:13.559560  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:13.640528  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:13.640570  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:12.665799  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.666662  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.668217  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.323746  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.323961  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:15.826422  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.326284  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.189581  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:16.203962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:16.204052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:16.240537  301425 cri.go:89] found id: ""
	I0729 13:40:16.240572  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.240583  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:16.240591  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:16.240659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:16.277060  301425 cri.go:89] found id: ""
	I0729 13:40:16.277099  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.277112  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:16.277123  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:16.277200  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:16.313839  301425 cri.go:89] found id: ""
	I0729 13:40:16.313869  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.313878  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:16.313884  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:16.313935  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:16.351806  301425 cri.go:89] found id: ""
	I0729 13:40:16.351840  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.351850  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:16.351858  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:16.351922  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:16.387122  301425 cri.go:89] found id: ""
	I0729 13:40:16.387158  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.387169  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:16.387176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:16.387242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:16.424180  301425 cri.go:89] found id: ""
	I0729 13:40:16.424209  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.424220  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:16.424229  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:16.424292  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:16.461827  301425 cri.go:89] found id: ""
	I0729 13:40:16.461865  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.461879  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:16.461889  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:16.461946  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:16.510198  301425 cri.go:89] found id: ""
	I0729 13:40:16.510230  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.510238  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:16.510248  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:16.510264  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:16.585378  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:16.585420  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.629304  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:16.629337  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:16.682386  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:16.682434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:16.698405  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:16.698436  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:16.770281  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.270551  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:19.284543  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:19.284617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:19.325194  301425 cri.go:89] found id: ""
	I0729 13:40:19.325221  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.325231  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:19.325238  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:19.325298  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:19.362007  301425 cri.go:89] found id: ""
	I0729 13:40:19.362038  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.362058  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:19.362066  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:19.362196  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:19.401162  301425 cri.go:89] found id: ""
	I0729 13:40:19.401191  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.401202  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:19.401210  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:19.401274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:19.434652  301425 cri.go:89] found id: ""
	I0729 13:40:19.434689  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.434700  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:19.434709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:19.434774  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:19.470116  301425 cri.go:89] found id: ""
	I0729 13:40:19.470149  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.470157  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:19.470164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:19.470218  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:19.503593  301425 cri.go:89] found id: ""
	I0729 13:40:19.503621  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.503629  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:19.503635  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:19.503696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:19.546127  301425 cri.go:89] found id: ""
	I0729 13:40:19.546155  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.546164  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:19.546169  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:19.546217  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:19.584600  301425 cri.go:89] found id: ""
	I0729 13:40:19.584639  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.584650  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:19.584663  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:19.584681  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:19.599411  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:19.599446  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:19.665811  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.665836  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:19.665853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:19.747295  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:19.747339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:19.790476  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:19.790516  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:18.669004  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.166437  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.824788  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.327093  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:20.825470  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.827651  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.346725  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:22.361349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:22.361443  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:22.394840  301425 cri.go:89] found id: ""
	I0729 13:40:22.394870  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.394881  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:22.394889  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:22.394956  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:22.429328  301425 cri.go:89] found id: ""
	I0729 13:40:22.429356  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.429364  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:22.429370  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:22.429431  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:22.463179  301425 cri.go:89] found id: ""
	I0729 13:40:22.463206  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.463214  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:22.463220  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:22.463291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:22.497527  301425 cri.go:89] found id: ""
	I0729 13:40:22.497557  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.497565  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:22.497571  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:22.497627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:22.537607  301425 cri.go:89] found id: ""
	I0729 13:40:22.537635  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.537646  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:22.537654  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:22.537718  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:22.580658  301425 cri.go:89] found id: ""
	I0729 13:40:22.580689  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.580701  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:22.580709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:22.580775  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:22.622229  301425 cri.go:89] found id: ""
	I0729 13:40:22.622261  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.622270  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:22.622282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:22.622346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:22.660091  301425 cri.go:89] found id: ""
	I0729 13:40:22.660120  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.660129  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:22.660139  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:22.660153  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.715053  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:22.715090  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:22.728865  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:22.728898  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:22.805760  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:22.805785  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:22.805799  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:22.890915  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:22.890960  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:25.457272  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:25.471002  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:25.471088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:25.506190  301425 cri.go:89] found id: ""
	I0729 13:40:25.506226  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.506237  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:25.506244  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:25.506297  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:25.540957  301425 cri.go:89] found id: ""
	I0729 13:40:25.540991  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.541002  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:25.541011  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:25.541074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:25.578378  301425 cri.go:89] found id: ""
	I0729 13:40:25.578424  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.578440  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:25.578448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:25.578518  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:25.620930  301425 cri.go:89] found id: ""
	I0729 13:40:25.620962  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.620979  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:25.620987  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:25.621056  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:25.655558  301425 cri.go:89] found id: ""
	I0729 13:40:25.655589  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.655597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:25.655604  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:25.655670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:25.688810  301425 cri.go:89] found id: ""
	I0729 13:40:25.688845  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.688855  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:25.688863  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:25.688930  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:25.724384  301425 cri.go:89] found id: ""
	I0729 13:40:25.724416  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.724428  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:25.724435  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:25.724514  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:25.763174  301425 cri.go:89] found id: ""
	I0729 13:40:25.763200  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.763209  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:25.763219  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:25.763232  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:25.818517  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:25.818569  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:25.833939  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:25.833973  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:25.910487  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:25.910515  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:25.910537  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:23.167028  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.666513  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:23.824183  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.827054  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.325894  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:27.824855  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.993887  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:25.993929  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:28.536843  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:28.550097  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:28.550175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:28.592664  301425 cri.go:89] found id: ""
	I0729 13:40:28.592697  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.592709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:28.592716  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:28.592788  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:28.638299  301425 cri.go:89] found id: ""
	I0729 13:40:28.638329  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.638337  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:28.638343  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:28.638395  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:28.682410  301425 cri.go:89] found id: ""
	I0729 13:40:28.682437  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.682446  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:28.682452  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:28.682511  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:28.719402  301425 cri.go:89] found id: ""
	I0729 13:40:28.719430  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.719438  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:28.719444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:28.719504  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:28.767515  301425 cri.go:89] found id: ""
	I0729 13:40:28.767547  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.767559  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:28.767568  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:28.767633  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:28.811600  301425 cri.go:89] found id: ""
	I0729 13:40:28.811632  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.811644  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:28.811652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:28.811727  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:28.853364  301425 cri.go:89] found id: ""
	I0729 13:40:28.853397  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.853407  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:28.853414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:28.853486  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:28.890981  301425 cri.go:89] found id: ""
	I0729 13:40:28.891013  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.891024  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:28.891035  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:28.891050  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:28.944174  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:28.944213  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:28.957724  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:28.957755  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:29.026457  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:29.026479  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:29.026497  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:29.105366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:29.105415  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:27.667251  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.166789  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:28.323476  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.324242  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:32.325477  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:29.825621  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.828363  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.649374  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:31.663432  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:31.663512  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:31.702047  301425 cri.go:89] found id: ""
	I0729 13:40:31.702080  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.702088  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:31.702098  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:31.702162  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:31.738484  301425 cri.go:89] found id: ""
	I0729 13:40:31.738510  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.738518  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:31.738524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:31.738583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:31.774214  301425 cri.go:89] found id: ""
	I0729 13:40:31.774249  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.774261  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:31.774270  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:31.774339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:31.810263  301425 cri.go:89] found id: ""
	I0729 13:40:31.810293  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.810302  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:31.810307  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:31.810369  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:31.848124  301425 cri.go:89] found id: ""
	I0729 13:40:31.848153  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.848160  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:31.848167  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:31.848234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:31.885531  301425 cri.go:89] found id: ""
	I0729 13:40:31.885561  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.885571  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:31.885580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:31.885650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:31.923904  301425 cri.go:89] found id: ""
	I0729 13:40:31.923939  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.923952  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:31.923959  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:31.924029  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:31.957165  301425 cri.go:89] found id: ""
	I0729 13:40:31.957202  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.957213  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:31.957228  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:31.957248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:32.039221  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:32.039262  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.078191  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:32.078229  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:32.131871  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:32.131922  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:32.146676  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:32.146706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:32.223849  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:34.724927  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:34.739029  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:34.739113  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:34.774627  301425 cri.go:89] found id: ""
	I0729 13:40:34.774660  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.774669  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:34.774675  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:34.774743  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:34.809840  301425 cri.go:89] found id: ""
	I0729 13:40:34.809872  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.809882  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:34.809887  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:34.809940  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:34.847530  301425 cri.go:89] found id: ""
	I0729 13:40:34.847561  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.847572  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:34.847580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:34.847648  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:34.881828  301425 cri.go:89] found id: ""
	I0729 13:40:34.881856  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.881870  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:34.881876  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:34.881937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:34.918903  301425 cri.go:89] found id: ""
	I0729 13:40:34.918937  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.918949  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:34.918956  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:34.919015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:34.954714  301425 cri.go:89] found id: ""
	I0729 13:40:34.954749  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.954761  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:34.954770  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:34.954825  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:34.993433  301425 cri.go:89] found id: ""
	I0729 13:40:34.993463  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.993472  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:34.993478  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:34.993531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:35.033830  301425 cri.go:89] found id: ""
	I0729 13:40:35.033859  301425 logs.go:276] 0 containers: []
	W0729 13:40:35.033874  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:35.033884  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:35.033900  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:35.084546  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:35.084595  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:35.098807  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:35.098845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:35.182636  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:35.182662  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:35.182674  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:35.262767  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:35.262808  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.665817  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.670805  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.823905  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.824232  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.326644  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.825977  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:37.802033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:37.815633  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:37.815697  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:37.857522  301425 cri.go:89] found id: ""
	I0729 13:40:37.857552  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.857563  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:37.857571  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:37.857627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:37.897527  301425 cri.go:89] found id: ""
	I0729 13:40:37.897564  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.897575  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:37.897583  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:37.897649  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.937135  301425 cri.go:89] found id: ""
	I0729 13:40:37.937167  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.937176  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:37.937189  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:37.937255  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:37.972699  301425 cri.go:89] found id: ""
	I0729 13:40:37.972734  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.972751  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:37.972761  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:37.972933  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:38.012702  301425 cri.go:89] found id: ""
	I0729 13:40:38.012732  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.012740  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:38.012747  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:38.012832  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:38.050228  301425 cri.go:89] found id: ""
	I0729 13:40:38.050260  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.050268  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:38.050275  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:38.050329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:38.084665  301425 cri.go:89] found id: ""
	I0729 13:40:38.084693  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.084707  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:38.084715  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:38.084780  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:38.119155  301425 cri.go:89] found id: ""
	I0729 13:40:38.119200  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.119211  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:38.119222  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:38.119236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:38.170934  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:38.170968  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:38.185298  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:38.185329  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:38.256118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:38.256149  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:38.256166  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:38.337090  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:38.337127  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:40.876177  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:40.889580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:40.889655  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:40.922971  301425 cri.go:89] found id: ""
	I0729 13:40:40.923002  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.923010  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:40.923016  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:40.923074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:40.955840  301425 cri.go:89] found id: ""
	I0729 13:40:40.955872  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.955884  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:40.955891  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:40.955952  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.165718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.166160  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.168344  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:38.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.324607  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.324996  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.344232  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:40.993258  301425 cri.go:89] found id: ""
	I0729 13:40:40.993290  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.993298  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:40.993305  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:40.993357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:41.026370  301425 cri.go:89] found id: ""
	I0729 13:40:41.026398  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.026409  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:41.026416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:41.026473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:41.060538  301425 cri.go:89] found id: ""
	I0729 13:40:41.060565  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.060574  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:41.060579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:41.060630  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:41.105074  301425 cri.go:89] found id: ""
	I0729 13:40:41.105108  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.105118  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:41.105126  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:41.105193  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:41.138254  301425 cri.go:89] found id: ""
	I0729 13:40:41.138280  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.138288  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:41.138294  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:41.138342  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:41.171432  301425 cri.go:89] found id: ""
	I0729 13:40:41.171458  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.171466  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:41.171475  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:41.171487  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:41.184703  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:41.184736  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:41.265356  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:41.265392  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:41.265409  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:41.345939  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:41.345979  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:41.388819  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:41.388852  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:43.940388  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:43.955448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:43.955515  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:43.998457  301425 cri.go:89] found id: ""
	I0729 13:40:43.998494  301425 logs.go:276] 0 containers: []
	W0729 13:40:43.998506  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:43.998515  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:43.998584  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:44.038142  301425 cri.go:89] found id: ""
	I0729 13:40:44.038173  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.038185  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:44.038193  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:44.038260  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:44.077270  301425 cri.go:89] found id: ""
	I0729 13:40:44.077302  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.077313  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:44.077321  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:44.077391  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:44.117612  301425 cri.go:89] found id: ""
	I0729 13:40:44.117641  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.117661  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:44.117681  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:44.117749  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:44.152564  301425 cri.go:89] found id: ""
	I0729 13:40:44.152603  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.152615  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:44.152623  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:44.152683  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:44.188245  301425 cri.go:89] found id: ""
	I0729 13:40:44.188276  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.188288  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:44.188296  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:44.188355  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:44.224947  301425 cri.go:89] found id: ""
	I0729 13:40:44.224975  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.224983  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:44.224989  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:44.225037  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:44.264830  301425 cri.go:89] found id: ""
	I0729 13:40:44.264860  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.264867  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:44.264877  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:44.264893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:44.343145  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:44.343182  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:44.384619  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:44.384650  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:44.438195  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:44.438237  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:44.452115  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:44.452152  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:44.526586  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:43.666987  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.167143  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.825141  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.324972  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.827065  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.325488  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:47.027726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:47.041174  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:47.041242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:47.079265  301425 cri.go:89] found id: ""
	I0729 13:40:47.079295  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.079304  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:47.079313  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:47.079380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:47.119775  301425 cri.go:89] found id: ""
	I0729 13:40:47.119807  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.119820  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:47.119828  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:47.119904  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:47.155381  301425 cri.go:89] found id: ""
	I0729 13:40:47.155415  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.155426  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:47.155434  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:47.155490  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:47.195071  301425 cri.go:89] found id: ""
	I0729 13:40:47.195103  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.195111  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:47.195117  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:47.195167  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:47.229487  301425 cri.go:89] found id: ""
	I0729 13:40:47.229519  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.229531  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:47.229539  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:47.229611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:47.266159  301425 cri.go:89] found id: ""
	I0729 13:40:47.266190  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.266201  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:47.266209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:47.266269  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:47.300813  301425 cri.go:89] found id: ""
	I0729 13:40:47.300845  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.300854  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:47.300860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:47.300916  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:47.340378  301425 cri.go:89] found id: ""
	I0729 13:40:47.340412  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.340432  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:47.340444  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:47.340464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:47.395403  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:47.395444  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:47.409505  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:47.409539  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:47.481327  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.481349  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:47.481365  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:47.560129  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:47.560172  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.105832  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:50.121192  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:50.121264  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:50.160217  301425 cri.go:89] found id: ""
	I0729 13:40:50.160247  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.160256  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:50.160262  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:50.160313  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:50.199952  301425 cri.go:89] found id: ""
	I0729 13:40:50.199986  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.199998  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:50.200005  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:50.200065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:50.240036  301425 cri.go:89] found id: ""
	I0729 13:40:50.240069  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.240076  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:50.240083  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:50.240134  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:50.279761  301425 cri.go:89] found id: ""
	I0729 13:40:50.279788  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.279796  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:50.279802  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:50.279852  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:50.320324  301425 cri.go:89] found id: ""
	I0729 13:40:50.320350  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.320358  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:50.320364  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:50.320423  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:50.356385  301425 cri.go:89] found id: ""
	I0729 13:40:50.356413  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.356421  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:50.356427  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:50.356482  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:50.396866  301425 cri.go:89] found id: ""
	I0729 13:40:50.396900  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.396912  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:50.396919  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:50.397008  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:50.434778  301425 cri.go:89] found id: ""
	I0729 13:40:50.434812  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.434823  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:50.434836  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:50.434853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:50.447746  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:50.447776  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:50.523750  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:50.523772  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:50.523787  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:50.604206  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:50.604255  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.647414  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:50.647449  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:48.666463  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.666670  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.823595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.824045  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.826836  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:51.326943  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.327715  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.201653  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:53.215745  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:53.215814  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:53.250482  301425 cri.go:89] found id: ""
	I0729 13:40:53.250508  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.250516  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:53.250522  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:53.250583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:53.285956  301425 cri.go:89] found id: ""
	I0729 13:40:53.285988  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.285996  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:53.286002  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:53.286055  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:53.320248  301425 cri.go:89] found id: ""
	I0729 13:40:53.320281  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.320292  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:53.320300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:53.320364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:53.355155  301425 cri.go:89] found id: ""
	I0729 13:40:53.355188  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.355200  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:53.355209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:53.355271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:53.389519  301425 cri.go:89] found id: ""
	I0729 13:40:53.389549  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.389557  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:53.389564  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:53.389620  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:53.424391  301425 cri.go:89] found id: ""
	I0729 13:40:53.424419  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.424427  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:53.424433  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:53.424492  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:53.463297  301425 cri.go:89] found id: ""
	I0729 13:40:53.463331  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.463342  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:53.463350  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:53.463433  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:53.497565  301425 cri.go:89] found id: ""
	I0729 13:40:53.497593  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.497601  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:53.497610  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:53.497622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.548906  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:53.548948  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:53.562789  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:53.562823  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:53.635656  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:53.635679  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:53.635693  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:53.715973  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:53.716024  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:53.166007  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.166420  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.324486  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.824480  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.825127  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.326505  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:56.258726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:56.273826  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:56.273905  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:56.310881  301425 cri.go:89] found id: ""
	I0729 13:40:56.310927  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.310936  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:56.310944  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:56.310999  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:56.350104  301425 cri.go:89] found id: ""
	I0729 13:40:56.350139  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.350151  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:56.350158  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:56.350221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:56.385100  301425 cri.go:89] found id: ""
	I0729 13:40:56.385136  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.385145  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:56.385151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:56.385234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:56.421904  301425 cri.go:89] found id: ""
	I0729 13:40:56.421941  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.421953  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:56.421961  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:56.422025  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:56.457366  301425 cri.go:89] found id: ""
	I0729 13:40:56.457403  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.457414  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:56.457422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:56.457491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:56.496700  301425 cri.go:89] found id: ""
	I0729 13:40:56.496732  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.496746  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:56.496755  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:56.496844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:56.532011  301425 cri.go:89] found id: ""
	I0729 13:40:56.532039  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.532047  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:56.532053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:56.532102  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:56.567511  301425 cri.go:89] found id: ""
	I0729 13:40:56.567543  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.567554  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:56.567566  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:56.567581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:56.615875  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:56.615914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:56.629818  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:56.629862  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:56.703255  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:56.703284  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:56.703298  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:56.786466  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:56.786508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:59.328670  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:59.342993  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:59.343061  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:59.378267  301425 cri.go:89] found id: ""
	I0729 13:40:59.378301  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.378313  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:59.378321  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:59.378392  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:59.415637  301425 cri.go:89] found id: ""
	I0729 13:40:59.415669  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.415680  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:59.415687  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:59.415759  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:59.451170  301425 cri.go:89] found id: ""
	I0729 13:40:59.451204  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.451212  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:59.451219  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:59.451275  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:59.485914  301425 cri.go:89] found id: ""
	I0729 13:40:59.485948  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.485960  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:59.485975  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:59.486052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:59.523168  301425 cri.go:89] found id: ""
	I0729 13:40:59.523198  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.523208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:59.523216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:59.523274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:59.557711  301425 cri.go:89] found id: ""
	I0729 13:40:59.557746  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.557758  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:59.557766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:59.557826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:59.593387  301425 cri.go:89] found id: ""
	I0729 13:40:59.593421  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.593434  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:59.593442  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:59.593506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:59.627521  301425 cri.go:89] found id: ""
	I0729 13:40:59.627555  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.627566  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:59.627578  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:59.627597  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:59.677497  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:59.677538  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:59.692116  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:59.692150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:59.759344  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:59.759369  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:59.759382  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:59.840380  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:59.840423  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:57.166964  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:59.666395  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:01.667229  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.323708  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.323995  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.325049  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.328293  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.826414  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
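	The interleaved pod_ready.go lines from the other three runs (PIDs 301044, 300705, 300746) are readiness polls that keep finding the metrics-server pods in kube-system not Ready. A rough manual equivalent, run against the matching cluster context; the pod name is taken from the log, while the jsonpath query itself is an assumption, not something the test executes:

	    # Hypothetical manual version of the readiness poll logged above.
	    kubectl --namespace kube-system get pod metrics-server-569cc877fc-dlrjb \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while pod_ready.go keeps reporting "Ready":"False"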
	I0729 13:41:02.380718  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:02.394436  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:02.394497  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:02.433283  301425 cri.go:89] found id: ""
	I0729 13:41:02.433313  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.433323  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:02.433332  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:02.433393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:02.467206  301425 cri.go:89] found id: ""
	I0729 13:41:02.467232  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.467241  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:02.467247  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:02.467300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:02.502743  301425 cri.go:89] found id: ""
	I0729 13:41:02.502774  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.502783  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:02.502790  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:02.502844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:02.536415  301425 cri.go:89] found id: ""
	I0729 13:41:02.536449  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.536462  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:02.536470  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:02.536527  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:02.570572  301425 cri.go:89] found id: ""
	I0729 13:41:02.570610  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.570621  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:02.570629  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:02.570702  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:02.606251  301425 cri.go:89] found id: ""
	I0729 13:41:02.606277  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.606285  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:02.606292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:02.606345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:02.644637  301425 cri.go:89] found id: ""
	I0729 13:41:02.644664  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.644675  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:02.644683  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:02.644750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:02.679493  301425 cri.go:89] found id: ""
	I0729 13:41:02.679519  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.679527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:02.679537  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:02.679553  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.734865  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:02.734896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:02.787929  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:02.787962  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:02.801317  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:02.801344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:02.867838  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:02.867862  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:02.867877  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.451323  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:05.465262  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:05.465338  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:05.499797  301425 cri.go:89] found id: ""
	I0729 13:41:05.499827  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.499837  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:05.499845  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:05.499912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:05.534363  301425 cri.go:89] found id: ""
	I0729 13:41:05.534403  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.534416  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:05.534424  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:05.534483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:05.571366  301425 cri.go:89] found id: ""
	I0729 13:41:05.571397  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.571408  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:05.571416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:05.571481  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:05.611301  301425 cri.go:89] found id: ""
	I0729 13:41:05.611335  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.611346  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:05.611355  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:05.611422  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:05.650698  301425 cri.go:89] found id: ""
	I0729 13:41:05.650738  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.650750  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:05.650758  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:05.650823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:05.686166  301425 cri.go:89] found id: ""
	I0729 13:41:05.686204  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.686216  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:05.686225  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:05.686279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:05.724567  301425 cri.go:89] found id: ""
	I0729 13:41:05.724604  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.724616  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:05.724628  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:05.724691  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:05.760401  301425 cri.go:89] found id: ""
	I0729 13:41:05.760430  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.760438  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:05.760448  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:05.760464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:05.811654  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:05.811698  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:05.827189  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:05.827226  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:05.899612  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:05.899636  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:05.899654  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:04.168533  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.665694  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:04.325443  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.824244  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.325499  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:07.326413  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.982384  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:05.982425  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.527609  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:08.542024  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:08.542086  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:08.576313  301425 cri.go:89] found id: ""
	I0729 13:41:08.576340  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.576348  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:08.576354  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:08.576406  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:08.609996  301425 cri.go:89] found id: ""
	I0729 13:41:08.610027  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.610038  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:08.610045  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:08.610111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:08.643722  301425 cri.go:89] found id: ""
	I0729 13:41:08.643750  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.643758  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:08.643765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:08.643815  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:08.679331  301425 cri.go:89] found id: ""
	I0729 13:41:08.679367  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.679378  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:08.679388  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:08.679459  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:08.718348  301425 cri.go:89] found id: ""
	I0729 13:41:08.718376  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.718384  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:08.718390  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:08.718444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:08.758086  301425 cri.go:89] found id: ""
	I0729 13:41:08.758128  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.758140  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:08.758150  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:08.758225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:08.794304  301425 cri.go:89] found id: ""
	I0729 13:41:08.794333  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.794345  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:08.794354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:08.794415  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:08.835448  301425 cri.go:89] found id: ""
	I0729 13:41:08.835477  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.835486  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:08.835495  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:08.835508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:08.923886  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:08.923931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.963921  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:08.963957  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:09.013852  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:09.013893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:09.027838  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:09.027872  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:09.097864  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:08.669271  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.165979  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:08.824724  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:10.825582  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:09.327071  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.826906  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.598762  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:11.612789  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:11.612903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:11.650029  301425 cri.go:89] found id: ""
	I0729 13:41:11.650063  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.650074  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:11.650084  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:11.650152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:11.687479  301425 cri.go:89] found id: ""
	I0729 13:41:11.687510  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.687520  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:11.687527  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:11.687593  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:11.723788  301425 cri.go:89] found id: ""
	I0729 13:41:11.723816  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.723824  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:11.723830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:11.723878  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:11.760304  301425 cri.go:89] found id: ""
	I0729 13:41:11.760341  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.760353  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:11.760361  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:11.760429  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:11.794175  301425 cri.go:89] found id: ""
	I0729 13:41:11.794202  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.794210  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:11.794216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:11.794276  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:11.830653  301425 cri.go:89] found id: ""
	I0729 13:41:11.830679  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.830689  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:11.830697  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:11.830755  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:11.869360  301425 cri.go:89] found id: ""
	I0729 13:41:11.869391  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.869403  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:11.869410  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:11.869473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:11.904164  301425 cri.go:89] found id: ""
	I0729 13:41:11.904195  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.904206  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:11.904218  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:11.904236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:11.979031  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.979054  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:11.979069  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:12.064215  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:12.064254  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:12.101854  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:12.101896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:12.152327  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:12.152362  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:14.668032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:14.683118  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:14.683182  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:14.722574  301425 cri.go:89] found id: ""
	I0729 13:41:14.722602  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.722612  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:14.722619  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:14.722686  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:14.759047  301425 cri.go:89] found id: ""
	I0729 13:41:14.759084  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.759094  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:14.759099  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:14.759156  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:14.794363  301425 cri.go:89] found id: ""
	I0729 13:41:14.794400  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.794411  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:14.794418  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:14.794488  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:14.831542  301425 cri.go:89] found id: ""
	I0729 13:41:14.831579  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.831586  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:14.831592  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:14.831650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:14.878710  301425 cri.go:89] found id: ""
	I0729 13:41:14.878745  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.878758  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:14.878765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:14.878824  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:14.937804  301425 cri.go:89] found id: ""
	I0729 13:41:14.937837  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.937847  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:14.937856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:14.937923  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:14.985616  301425 cri.go:89] found id: ""
	I0729 13:41:14.985649  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.985658  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:14.985665  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:14.985737  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:15.023210  301425 cri.go:89] found id: ""
	I0729 13:41:15.023248  301425 logs.go:276] 0 containers: []
	W0729 13:41:15.023261  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:15.023273  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:15.023288  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:15.072549  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:15.072587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:15.086624  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:15.086653  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:15.155391  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:15.155412  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:15.155426  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:15.237480  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:15.237535  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:13.666473  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.666831  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:13.324177  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.324419  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:14.326023  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:16.826314  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.779568  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:17.794163  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:17.794225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:17.831416  301425 cri.go:89] found id: ""
	I0729 13:41:17.831446  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.831456  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:17.831463  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:17.831519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:17.868713  301425 cri.go:89] found id: ""
	I0729 13:41:17.868740  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.868752  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:17.868758  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:17.868834  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:17.913159  301425 cri.go:89] found id: ""
	I0729 13:41:17.913200  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.913211  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:17.913221  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:17.913291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:17.947528  301425 cri.go:89] found id: ""
	I0729 13:41:17.947559  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.947567  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:17.947573  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:17.947693  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:17.982280  301425 cri.go:89] found id: ""
	I0729 13:41:17.982314  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.982323  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:17.982330  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:17.982407  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:18.023729  301425 cri.go:89] found id: ""
	I0729 13:41:18.023767  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.023776  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:18.023783  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:18.023847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:18.061594  301425 cri.go:89] found id: ""
	I0729 13:41:18.061629  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.061637  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:18.061642  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:18.061694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:18.095705  301425 cri.go:89] found id: ""
	I0729 13:41:18.095735  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.095745  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:18.095758  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:18.095778  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:18.175843  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:18.175879  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:18.222979  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:18.223015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:18.277265  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:18.277308  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:18.291002  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:18.291037  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:18.373425  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
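	Every describe-nodes attempt fails the same way because nothing is serving on localhost:8443, which is consistent with the crictl probes finding no kube-apiserver container. A quick way to confirm that from the node, assuming ss is available there (the ss invocation is an illustration, not a command taken from the log):

	    # Sketch: confirm nothing is listening on the apiserver port.
	    sudo ss -lntp | grep ':8443' || echo "nothing listening on 8443"
	    sudo crictl ps -a --name=kube-apiserver   # same probe the log runs, without --quiet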
	I0729 13:41:20.873958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:20.888091  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:20.888153  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:20.925850  301425 cri.go:89] found id: ""
	I0729 13:41:20.925886  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.925894  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:20.925901  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:20.925955  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:20.962725  301425 cri.go:89] found id: ""
	I0729 13:41:20.962762  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.962774  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:20.962782  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:20.962847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:18.166668  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.166993  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.827065  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.325697  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:19.325369  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:21.326574  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.998741  301425 cri.go:89] found id: ""
	I0729 13:41:20.998778  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.998787  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:20.998794  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:20.998842  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:21.036370  301425 cri.go:89] found id: ""
	I0729 13:41:21.036401  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.036410  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:21.036417  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:21.036483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:21.071560  301425 cri.go:89] found id: ""
	I0729 13:41:21.071588  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.071597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:21.071605  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:21.071670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:21.106778  301425 cri.go:89] found id: ""
	I0729 13:41:21.106810  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.106822  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:21.106830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:21.106890  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:21.139901  301425 cri.go:89] found id: ""
	I0729 13:41:21.139926  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.139934  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:21.139940  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:21.140001  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:21.173281  301425 cri.go:89] found id: ""
	I0729 13:41:21.173312  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.173320  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:21.173330  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:21.173344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:21.225055  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:21.225095  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:21.239780  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:21.239864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:21.313460  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:21.313486  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:21.313504  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:21.398557  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:21.398599  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:23.937873  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:23.951595  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:23.951653  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:23.987177  301425 cri.go:89] found id: ""
	I0729 13:41:23.987208  301425 logs.go:276] 0 containers: []
	W0729 13:41:23.987217  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:23.987225  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:23.987324  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:24.030197  301425 cri.go:89] found id: ""
	I0729 13:41:24.030251  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.030264  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:24.030272  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:24.030339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:24.068031  301425 cri.go:89] found id: ""
	I0729 13:41:24.068061  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.068074  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:24.068081  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:24.068154  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:24.107192  301425 cri.go:89] found id: ""
	I0729 13:41:24.107221  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.107232  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:24.107239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:24.107304  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:24.143154  301425 cri.go:89] found id: ""
	I0729 13:41:24.143182  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.143190  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:24.143196  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:24.143248  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:24.181268  301425 cri.go:89] found id: ""
	I0729 13:41:24.181296  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.181304  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:24.181311  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:24.181370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:24.215248  301425 cri.go:89] found id: ""
	I0729 13:41:24.215284  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.215293  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:24.215299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:24.215363  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:24.250796  301425 cri.go:89] found id: ""
	I0729 13:41:24.250822  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.250831  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:24.250841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:24.250853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:24.305841  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:24.305883  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:24.320182  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:24.320214  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:24.389667  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:24.389690  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:24.389707  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:24.471435  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:24.471479  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:22.665718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.166432  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:22.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:24.826598  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:26.828504  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:23.825754  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.834253  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:28.329733  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:27.014508  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:27.029318  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:27.029382  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:27.064115  301425 cri.go:89] found id: ""
	I0729 13:41:27.064150  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.064161  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:27.064169  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:27.064250  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:27.099081  301425 cri.go:89] found id: ""
	I0729 13:41:27.099110  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.099123  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:27.099131  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:27.099197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:27.132475  301425 cri.go:89] found id: ""
	I0729 13:41:27.132506  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.132518  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:27.132527  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:27.132595  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:27.168924  301425 cri.go:89] found id: ""
	I0729 13:41:27.168948  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.168956  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:27.168962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:27.169015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:27.204052  301425 cri.go:89] found id: ""
	I0729 13:41:27.204082  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.204094  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:27.204109  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:27.204170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:27.238355  301425 cri.go:89] found id: ""
	I0729 13:41:27.238383  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.238391  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:27.238397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:27.238496  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:27.276104  301425 cri.go:89] found id: ""
	I0729 13:41:27.276139  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.276150  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:27.276157  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:27.276222  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:27.308612  301425 cri.go:89] found id: ""
	I0729 13:41:27.308643  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.308654  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:27.308667  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:27.308683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:27.362472  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:27.362511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:27.376349  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:27.376383  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:27.458450  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:27.458472  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:27.458486  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:27.536405  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:27.536445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:30.076285  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:30.091308  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:30.091386  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:30.138335  301425 cri.go:89] found id: ""
	I0729 13:41:30.138369  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.138381  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:30.138389  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:30.138454  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:30.176395  301425 cri.go:89] found id: ""
	I0729 13:41:30.176425  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.176435  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:30.176443  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:30.176495  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:30.214990  301425 cri.go:89] found id: ""
	I0729 13:41:30.215027  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.215035  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:30.215041  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:30.215090  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:30.252051  301425 cri.go:89] found id: ""
	I0729 13:41:30.252080  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.252088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:30.252094  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:30.252155  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:30.287210  301425 cri.go:89] found id: ""
	I0729 13:41:30.287240  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.287249  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:30.287254  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:30.287337  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:30.322813  301425 cri.go:89] found id: ""
	I0729 13:41:30.322842  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.322851  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:30.322857  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:30.322924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:30.358697  301425 cri.go:89] found id: ""
	I0729 13:41:30.358730  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.358738  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:30.358744  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:30.358804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:30.394252  301425 cri.go:89] found id: ""
	I0729 13:41:30.394283  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.394294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:30.394305  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:30.394321  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:30.446777  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:30.446820  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:30.461564  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:30.461605  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:30.537918  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:30.537942  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:30.537958  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:30.613821  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:30.613865  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:27.167654  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.666133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.323396  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:31.324718  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:30.825879  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:32.826458  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.154081  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:33.168252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:33.168353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:33.205675  301425 cri.go:89] found id: ""
	I0729 13:41:33.205708  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.205719  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:33.205727  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:33.205799  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:33.240556  301425 cri.go:89] found id: ""
	I0729 13:41:33.240582  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.240590  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:33.240596  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:33.240644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:33.276662  301425 cri.go:89] found id: ""
	I0729 13:41:33.276690  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.276698  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:33.276704  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:33.276773  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:33.318631  301425 cri.go:89] found id: ""
	I0729 13:41:33.318667  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.318677  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:33.318685  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:33.318762  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:33.354372  301425 cri.go:89] found id: ""
	I0729 13:41:33.354403  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.354412  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:33.354421  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:33.354475  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:33.389309  301425 cri.go:89] found id: ""
	I0729 13:41:33.389337  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.389346  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:33.389352  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:33.389404  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:33.423689  301425 cri.go:89] found id: ""
	I0729 13:41:33.423732  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.423745  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:33.423753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:33.423823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:33.457556  301425 cri.go:89] found id: ""
	I0729 13:41:33.457593  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.457605  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:33.457618  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:33.457634  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:33.534377  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:33.534416  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.579646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:33.579689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:33.629784  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:33.629819  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:33.643878  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:33.643912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:33.716446  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:32.167152  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:34.666054  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.667479  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.823726  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.824199  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.324827  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.325672  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.216598  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:36.229904  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:36.230003  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:36.263721  301425 cri.go:89] found id: ""
	I0729 13:41:36.263752  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.263771  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:36.263786  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:36.263838  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:36.297900  301425 cri.go:89] found id: ""
	I0729 13:41:36.297932  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.297950  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:36.297958  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:36.298023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:36.338037  301425 cri.go:89] found id: ""
	I0729 13:41:36.338064  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.338072  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:36.338078  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:36.338125  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:36.375334  301425 cri.go:89] found id: ""
	I0729 13:41:36.375362  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.375370  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:36.375375  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:36.375426  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:36.410760  301425 cri.go:89] found id: ""
	I0729 13:41:36.410794  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.410805  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:36.410813  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:36.410888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:36.445247  301425 cri.go:89] found id: ""
	I0729 13:41:36.445280  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.445291  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:36.445300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:36.445364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:36.487183  301425 cri.go:89] found id: ""
	I0729 13:41:36.487214  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.487221  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:36.487228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:36.487301  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:36.522407  301425 cri.go:89] found id: ""
	I0729 13:41:36.522433  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.522442  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:36.522453  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:36.522468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:36.537163  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:36.537197  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:36.608334  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.608361  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:36.608376  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:36.689026  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:36.689074  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:36.728580  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:36.728618  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.279605  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:39.293259  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:39.293320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:39.329070  301425 cri.go:89] found id: ""
	I0729 13:41:39.329095  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.329103  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:39.329109  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:39.329160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:39.362992  301425 cri.go:89] found id: ""
	I0729 13:41:39.363023  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.363032  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:39.363038  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:39.363100  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:39.403094  301425 cri.go:89] found id: ""
	I0729 13:41:39.403128  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.403140  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:39.403147  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:39.403201  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:39.435761  301425 cri.go:89] found id: ""
	I0729 13:41:39.435795  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.435806  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:39.435814  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:39.435881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:39.468299  301425 cri.go:89] found id: ""
	I0729 13:41:39.468332  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.468341  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:39.468349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:39.468417  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:39.505114  301425 cri.go:89] found id: ""
	I0729 13:41:39.505149  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.505162  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:39.505172  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:39.505234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:39.536942  301425 cri.go:89] found id: ""
	I0729 13:41:39.536975  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.536986  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:39.536994  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:39.537064  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:39.577394  301425 cri.go:89] found id: ""
	I0729 13:41:39.577427  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.577439  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:39.577451  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:39.577468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.631143  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:39.631184  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:39.645020  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:39.645047  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:39.718256  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:39.718283  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:39.718297  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:39.801990  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:39.802036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:39.166762  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.167646  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.824966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.825836  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.324009  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.327169  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.826091  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.347066  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:42.359902  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:42.359983  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:42.395494  301425 cri.go:89] found id: ""
	I0729 13:41:42.395529  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.395540  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:42.395548  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:42.395611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:42.429305  301425 cri.go:89] found id: ""
	I0729 13:41:42.429334  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.429343  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:42.429350  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:42.429401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:42.466902  301425 cri.go:89] found id: ""
	I0729 13:41:42.466931  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.466942  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:42.466949  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:42.467017  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:42.504582  301425 cri.go:89] found id: ""
	I0729 13:41:42.504618  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.504628  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:42.504652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:42.504717  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:42.539649  301425 cri.go:89] found id: ""
	I0729 13:41:42.539676  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.539686  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:42.539695  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:42.539758  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:42.579209  301425 cri.go:89] found id: ""
	I0729 13:41:42.579238  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.579249  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:42.579257  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:42.579320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:42.614832  301425 cri.go:89] found id: ""
	I0729 13:41:42.614861  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.614869  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:42.614874  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:42.614925  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:42.651837  301425 cri.go:89] found id: ""
	I0729 13:41:42.651865  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.651873  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:42.651883  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:42.651899  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:42.707149  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:42.707190  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:42.720990  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:42.721043  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:42.789818  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:42.789849  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:42.789867  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:42.871880  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:42.871934  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.416172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:45.428923  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:45.428994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:45.466667  301425 cri.go:89] found id: ""
	I0729 13:41:45.466699  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.466710  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:45.466717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:45.466783  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:45.501779  301425 cri.go:89] found id: ""
	I0729 13:41:45.501813  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.501825  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:45.501832  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:45.501896  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:45.537507  301425 cri.go:89] found id: ""
	I0729 13:41:45.537537  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.537547  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:45.537554  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:45.537619  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:45.575430  301425 cri.go:89] found id: ""
	I0729 13:41:45.575460  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.575467  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:45.575474  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:45.575523  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:45.613009  301425 cri.go:89] found id: ""
	I0729 13:41:45.613038  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.613047  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:45.613053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:45.613103  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:45.650734  301425 cri.go:89] found id: ""
	I0729 13:41:45.650767  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.650778  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:45.650786  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:45.650853  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:45.684301  301425 cri.go:89] found id: ""
	I0729 13:41:45.684332  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.684341  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:45.684349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:45.684416  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:45.719861  301425 cri.go:89] found id: ""
	I0729 13:41:45.719901  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.719911  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:45.719921  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:45.719936  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:45.800422  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:45.800464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.842460  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:45.842493  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:45.897388  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:45.897430  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:45.911554  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:45.911587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:41:43.665771  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.666196  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:44.325813  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:46.824774  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:43.828518  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.830106  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:48.325196  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	W0729 13:41:45.984435  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.485014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:48.498038  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:48.498110  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:48.534248  301425 cri.go:89] found id: ""
	I0729 13:41:48.534280  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.534291  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:48.534299  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:48.534362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:48.572411  301425 cri.go:89] found id: ""
	I0729 13:41:48.572445  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.572457  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:48.572465  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:48.572524  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:48.612345  301425 cri.go:89] found id: ""
	I0729 13:41:48.612373  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.612381  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:48.612387  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:48.612450  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:48.650334  301425 cri.go:89] found id: ""
	I0729 13:41:48.650385  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.650395  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:48.650401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:48.650466  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:48.687460  301425 cri.go:89] found id: ""
	I0729 13:41:48.687490  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.687501  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:48.687508  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:48.687572  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:48.735028  301425 cri.go:89] found id: ""
	I0729 13:41:48.735064  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.735077  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:48.735085  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:48.735142  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:48.771175  301425 cri.go:89] found id: ""
	I0729 13:41:48.771209  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.771220  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:48.771228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:48.771300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:48.808267  301425 cri.go:89] found id: ""
	I0729 13:41:48.808295  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.808304  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:48.808314  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:48.808328  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:48.850520  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:48.850557  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:48.902563  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:48.902612  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:48.919082  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:48.919114  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:48.999185  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.999213  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:48.999241  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:48.166020  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:49.323402  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.326596  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.825399  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:52.831823  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.579922  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:51.593149  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:51.593213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:51.626302  301425 cri.go:89] found id: ""
	I0729 13:41:51.626330  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.626338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:51.626344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:51.626393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:51.659551  301425 cri.go:89] found id: ""
	I0729 13:41:51.659578  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.659586  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:51.659592  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:51.659642  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:51.696842  301425 cri.go:89] found id: ""
	I0729 13:41:51.696868  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.696876  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:51.696882  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:51.696937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:51.737209  301425 cri.go:89] found id: ""
	I0729 13:41:51.737237  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.737246  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:51.737253  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:51.737317  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:51.772782  301425 cri.go:89] found id: ""
	I0729 13:41:51.772829  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.772842  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:51.772850  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:51.772921  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:51.806649  301425 cri.go:89] found id: ""
	I0729 13:41:51.806679  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.806690  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:51.806698  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:51.806771  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:51.848950  301425 cri.go:89] found id: ""
	I0729 13:41:51.848978  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.848989  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:51.848997  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:51.849065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:51.884875  301425 cri.go:89] found id: ""
	I0729 13:41:51.884902  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.884910  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:51.884920  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:51.884932  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.964282  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:51.964322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:52.004218  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:52.004251  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:52.056230  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:52.056266  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.069591  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:52.069622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:52.142552  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:54.643154  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:54.657199  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:54.657259  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:54.694124  301425 cri.go:89] found id: ""
	I0729 13:41:54.694152  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.694159  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:54.694165  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:54.694221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:54.732072  301425 cri.go:89] found id: ""
	I0729 13:41:54.732109  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.732119  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:54.732127  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:54.732194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:54.768257  301425 cri.go:89] found id: ""
	I0729 13:41:54.768294  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.768306  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:54.768314  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:54.768383  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:54.807596  301425 cri.go:89] found id: ""
	I0729 13:41:54.807631  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.807643  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:54.807651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:54.807716  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:54.845107  301425 cri.go:89] found id: ""
	I0729 13:41:54.845134  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.845142  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:54.845148  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:54.845197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:54.880627  301425 cri.go:89] found id: ""
	I0729 13:41:54.880655  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.880667  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:54.880675  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:54.880750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:54.918122  301425 cri.go:89] found id: ""
	I0729 13:41:54.918151  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.918159  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:54.918165  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:54.918219  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:54.956943  301425 cri.go:89] found id: ""
	I0729 13:41:54.956986  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.956999  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:54.957022  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:54.957036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:55.032512  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:55.032547  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:55.032564  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:55.116653  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:55.116699  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:55.177030  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:55.177059  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:55.238789  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:55.238831  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.166339  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:54.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:53.824694  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:56.324761  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:55.324698  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.326135  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.753504  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:57.766354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:57.766436  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:57.802691  301425 cri.go:89] found id: ""
	I0729 13:41:57.802728  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.802740  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:57.802746  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:57.802807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:57.839800  301425 cri.go:89] found id: ""
	I0729 13:41:57.839823  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.839830  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:57.839846  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:57.839902  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:57.881592  301425 cri.go:89] found id: ""
	I0729 13:41:57.881617  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.881625  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:57.881631  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:57.881681  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.916245  301425 cri.go:89] found id: ""
	I0729 13:41:57.916273  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.916282  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:57.916290  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:57.916346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:57.952224  301425 cri.go:89] found id: ""
	I0729 13:41:57.952261  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.952272  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:57.952280  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:57.952340  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:57.985508  301425 cri.go:89] found id: ""
	I0729 13:41:57.985537  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.985548  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:57.985557  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:57.985624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:58.022354  301425 cri.go:89] found id: ""
	I0729 13:41:58.022382  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.022391  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:58.022397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:58.022462  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:58.055865  301425 cri.go:89] found id: ""
	I0729 13:41:58.055891  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.055900  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:58.055914  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:58.055931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:58.069143  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:58.069177  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:58.143137  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:58.143164  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:58.143183  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:58.224631  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:58.224672  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:58.266437  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:58.266470  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:00.819300  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:00.834195  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:00.834258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:00.869660  301425 cri.go:89] found id: ""
	I0729 13:42:00.869697  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.869709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:00.869717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:00.869777  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:00.915601  301425 cri.go:89] found id: ""
	I0729 13:42:00.915630  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.915638  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:00.915644  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:00.915694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:00.956981  301425 cri.go:89] found id: ""
	I0729 13:42:00.957020  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.957028  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:00.957034  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:00.957094  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.166038  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.666455  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.666824  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:58.824729  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.825513  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.825074  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.826480  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.995761  301425 cri.go:89] found id: ""
	I0729 13:42:00.995793  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.995801  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:00.995817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:00.995869  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:01.047668  301425 cri.go:89] found id: ""
	I0729 13:42:01.047699  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.047707  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:01.047713  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:01.047787  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:01.085178  301425 cri.go:89] found id: ""
	I0729 13:42:01.085209  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.085217  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:01.085224  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:01.085278  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:01.125282  301425 cri.go:89] found id: ""
	I0729 13:42:01.125310  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.125320  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:01.125329  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:01.125396  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:01.165972  301425 cri.go:89] found id: ""
	I0729 13:42:01.166005  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.166021  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:01.166033  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:01.166049  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:01.236500  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:01.236523  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:01.236540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:01.320918  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:01.320959  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:01.366975  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:01.367015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:01.420347  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:01.420389  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:03.936048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:03.949603  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:03.949679  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:03.987529  301425 cri.go:89] found id: ""
	I0729 13:42:03.987557  301425 logs.go:276] 0 containers: []
	W0729 13:42:03.987567  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:03.987574  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:03.987639  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:04.027325  301425 cri.go:89] found id: ""
	I0729 13:42:04.027355  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.027365  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:04.027372  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:04.027437  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:04.063019  301425 cri.go:89] found id: ""
	I0729 13:42:04.063050  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.063059  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:04.063065  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:04.063117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:04.101106  301425 cri.go:89] found id: ""
	I0729 13:42:04.101135  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.101146  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:04.101153  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:04.101242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:04.137186  301425 cri.go:89] found id: ""
	I0729 13:42:04.137219  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.137230  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:04.137238  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:04.137302  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:04.175732  301425 cri.go:89] found id: ""
	I0729 13:42:04.175761  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.175770  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:04.175776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:04.175826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:04.213265  301425 cri.go:89] found id: ""
	I0729 13:42:04.213296  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.213307  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:04.213315  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:04.213381  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:04.248581  301425 cri.go:89] found id: ""
	I0729 13:42:04.248609  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.248617  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:04.248627  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:04.248643  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:04.303277  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:04.303400  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:04.317518  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:04.317547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:04.385209  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:04.385229  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:04.385242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:04.470629  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:04.470680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:04.167299  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.168006  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.324087  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:05.324904  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.826588  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.325326  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:08.326125  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.012455  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:07.028535  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:07.028621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:07.063453  301425 cri.go:89] found id: ""
	I0729 13:42:07.063496  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.063505  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:07.063511  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:07.063582  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:07.098243  301425 cri.go:89] found id: ""
	I0729 13:42:07.098274  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.098284  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:07.098291  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:07.098357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:07.138122  301425 cri.go:89] found id: ""
	I0729 13:42:07.138149  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.138157  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:07.138162  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:07.138213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:07.176772  301425 cri.go:89] found id: ""
	I0729 13:42:07.176814  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.176826  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:07.176835  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:07.176894  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:07.214867  301425 cri.go:89] found id: ""
	I0729 13:42:07.214898  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.214914  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:07.214920  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:07.214979  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:07.253443  301425 cri.go:89] found id: ""
	I0729 13:42:07.253471  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.253481  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:07.253490  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:07.253550  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:07.287284  301425 cri.go:89] found id: ""
	I0729 13:42:07.287326  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.287338  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:07.287349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:07.287411  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:07.330550  301425 cri.go:89] found id: ""
	I0729 13:42:07.330577  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.330588  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:07.330599  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:07.330620  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:07.384226  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:07.384268  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:07.398790  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:07.398817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:07.462868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:07.462893  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:07.462914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:07.538665  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:07.538706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.078452  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:10.091962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:10.092027  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:10.127401  301425 cri.go:89] found id: ""
	I0729 13:42:10.127434  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.127445  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:10.127454  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:10.127531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:10.161088  301425 cri.go:89] found id: ""
	I0729 13:42:10.161117  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.161127  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:10.161134  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:10.161187  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:10.199721  301425 cri.go:89] found id: ""
	I0729 13:42:10.199751  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.199763  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:10.199769  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:10.199821  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:10.237067  301425 cri.go:89] found id: ""
	I0729 13:42:10.237106  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.237120  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:10.237127  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:10.237191  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:10.275863  301425 cri.go:89] found id: ""
	I0729 13:42:10.275894  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.275909  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:10.275918  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:10.275981  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:10.313234  301425 cri.go:89] found id: ""
	I0729 13:42:10.313262  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.313270  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:10.313276  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:10.313334  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:10.353530  301425 cri.go:89] found id: ""
	I0729 13:42:10.353558  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.353569  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:10.353576  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:10.353644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:10.389488  301425 cri.go:89] found id: ""
	I0729 13:42:10.389516  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.389527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:10.389539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:10.389562  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.428705  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:10.428740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:10.484413  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:10.484456  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:10.499203  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:10.499248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:10.570868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:10.570894  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:10.570907  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:08.667158  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:11.166721  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.825638  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.324753  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.326752  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:12.826001  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:13.151788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:13.165297  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:13.165367  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:13.203752  301425 cri.go:89] found id: ""
	I0729 13:42:13.203786  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.203798  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:13.203805  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:13.203874  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:13.240454  301425 cri.go:89] found id: ""
	I0729 13:42:13.240491  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.240499  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:13.240504  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:13.240556  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:13.276508  301425 cri.go:89] found id: ""
	I0729 13:42:13.276536  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.276545  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:13.276553  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:13.276617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:13.311252  301425 cri.go:89] found id: ""
	I0729 13:42:13.311280  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.311291  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:13.311299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:13.311353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:13.351777  301425 cri.go:89] found id: ""
	I0729 13:42:13.351808  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.351817  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:13.351823  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:13.351881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:13.389020  301425 cri.go:89] found id: ""
	I0729 13:42:13.389049  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.389058  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:13.389064  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:13.389126  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:13.424353  301425 cri.go:89] found id: ""
	I0729 13:42:13.424387  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.424395  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:13.424401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:13.424451  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:13.460755  301425 cri.go:89] found id: ""
	I0729 13:42:13.460788  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.460817  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:13.460830  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:13.460850  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:13.500201  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:13.500234  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:13.553319  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:13.553357  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:13.567496  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:13.567529  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:13.644662  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:13.644686  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:13.644700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:13.667287  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.160289  301044 pod_ready.go:81] duration metric: took 4m0.000442608s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:16.160321  301044 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 13:42:16.160342  301044 pod_ready.go:38] duration metric: took 4m7.984743222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:16.160378  301044 kubeadm.go:597] duration metric: took 4m16.091281244s to restartPrimaryControlPlane
	W0729 13:42:16.160459  301044 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:16.160486  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:12.825387  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.826853  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.827679  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.829149  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326337  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326370  300746 pod_ready.go:81] duration metric: took 4m0.007721109s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:17.326383  300746 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:42:17.326392  300746 pod_ready.go:38] duration metric: took 4m8.417741792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:17.326410  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:42:17.326446  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:17.326514  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:17.373993  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.374027  300746 cri.go:89] found id: ""
	I0729 13:42:17.374037  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:17.374118  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.384841  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:17.384929  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:17.422219  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.422253  300746 cri.go:89] found id: ""
	I0729 13:42:17.422263  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:17.422349  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.427319  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:17.427385  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:17.469310  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:17.469336  300746 cri.go:89] found id: ""
	I0729 13:42:17.469347  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:17.469412  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.474501  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:17.474590  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:17.520767  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:17.520808  300746 cri.go:89] found id: ""
	I0729 13:42:17.520818  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:17.520881  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.525543  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:17.525643  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:17.572718  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.572749  300746 cri.go:89] found id: ""
	I0729 13:42:17.572758  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:17.572839  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.577227  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:17.577304  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:17.614076  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.614098  300746 cri.go:89] found id: ""
	I0729 13:42:17.614106  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:17.614153  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.618404  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:17.618479  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:17.666242  300746 cri.go:89] found id: ""
	I0729 13:42:17.666275  300746 logs.go:276] 0 containers: []
	W0729 13:42:17.666285  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:17.666301  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:17.666373  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:17.713379  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:17.713411  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:17.713418  300746 cri.go:89] found id: ""
	I0729 13:42:17.713428  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:17.713493  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.719026  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.723948  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:17.723974  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:17.743561  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:17.743607  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.803393  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:17.803425  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.855689  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:17.855723  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.898327  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:17.898361  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.951024  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:17.951060  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:18.014040  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:18.014082  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:18.159937  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:18.159984  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:18.201626  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:18.201667  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:18.247168  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:18.247211  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:18.291431  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:18.291469  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:18.333636  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:18.333671  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.226602  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:16.242934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:16.243005  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:16.284033  301425 cri.go:89] found id: ""
	I0729 13:42:16.284064  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.284075  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:16.284083  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:16.284152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:16.328362  301425 cri.go:89] found id: ""
	I0729 13:42:16.328388  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.328396  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:16.328402  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:16.328464  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:16.372664  301425 cri.go:89] found id: ""
	I0729 13:42:16.372701  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.372712  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:16.372727  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:16.372818  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:16.416085  301425 cri.go:89] found id: ""
	I0729 13:42:16.416119  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.416130  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:16.416138  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:16.416194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:16.457786  301425 cri.go:89] found id: ""
	I0729 13:42:16.457819  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.457830  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:16.457838  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:16.457903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:16.498929  301425 cri.go:89] found id: ""
	I0729 13:42:16.498962  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.498971  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:16.498979  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:16.499043  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:16.546159  301425 cri.go:89] found id: ""
	I0729 13:42:16.546187  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.546199  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:16.546207  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:16.546270  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:16.585010  301425 cri.go:89] found id: ""
	I0729 13:42:16.585041  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.585052  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:16.585065  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:16.585081  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:16.639033  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:16.639079  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:16.656209  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:16.656242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:16.734835  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:16.734863  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:16.734940  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.818756  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:16.818798  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.370796  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:19.384267  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:19.384354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:19.425595  301425 cri.go:89] found id: ""
	I0729 13:42:19.425629  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.425641  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:19.425650  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:19.425715  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:19.461470  301425 cri.go:89] found id: ""
	I0729 13:42:19.461506  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.461517  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:19.461524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:19.461592  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:19.508232  301425 cri.go:89] found id: ""
	I0729 13:42:19.508265  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.508275  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:19.508283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:19.508360  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:19.546226  301425 cri.go:89] found id: ""
	I0729 13:42:19.546259  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.546275  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:19.546283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:19.546354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:19.581125  301425 cri.go:89] found id: ""
	I0729 13:42:19.581156  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.581167  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:19.581176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:19.581242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:19.619680  301425 cri.go:89] found id: ""
	I0729 13:42:19.619719  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.619728  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:19.619736  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:19.619800  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:19.657096  301425 cri.go:89] found id: ""
	I0729 13:42:19.657126  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.657136  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:19.657142  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:19.657203  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:19.697247  301425 cri.go:89] found id: ""
	I0729 13:42:19.697277  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.697286  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:19.697297  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:19.697312  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:19.714900  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:19.714935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:19.794118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:19.794145  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:19.794161  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:19.907077  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:19.907122  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.949841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:19.949871  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:19.324474  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:21.826117  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:18.858720  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:18.858773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:21.419344  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:21.440121  300746 api_server.go:72] duration metric: took 4m17.790553991s to wait for apiserver process to appear ...
	I0729 13:42:21.440149  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:42:21.440190  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:21.440242  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:21.485874  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:21.485897  300746 cri.go:89] found id: ""
	I0729 13:42:21.485905  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:21.485956  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.490424  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:21.490493  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:21.532174  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:21.532202  300746 cri.go:89] found id: ""
	I0729 13:42:21.532211  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:21.532259  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.536561  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:21.536622  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:21.579375  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:21.579397  300746 cri.go:89] found id: ""
	I0729 13:42:21.579404  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:21.579450  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.584710  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:21.584779  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:21.621437  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.621465  300746 cri.go:89] found id: ""
	I0729 13:42:21.621475  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:21.621536  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.625829  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:21.625898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:21.666063  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:21.666086  300746 cri.go:89] found id: ""
	I0729 13:42:21.666095  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:21.666162  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.670822  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:21.670898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:21.713993  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:21.714022  300746 cri.go:89] found id: ""
	I0729 13:42:21.714032  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:21.714099  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.718967  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:21.719044  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:21.761282  300746 cri.go:89] found id: ""
	I0729 13:42:21.761312  300746 logs.go:276] 0 containers: []
	W0729 13:42:21.761320  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:21.761327  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:21.761390  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:21.810085  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:21.810114  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:21.810121  300746 cri.go:89] found id: ""
	I0729 13:42:21.810130  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:21.810185  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.814713  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.819968  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:21.819996  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:21.834798  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:21.834823  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:21.957963  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:21.958000  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.995345  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:21.995376  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:22.037737  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:22.037773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:22.074774  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:22.074813  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:22.123172  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.123205  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.181432  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:22.181473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:22.237128  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:22.237162  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:22.285733  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:22.285766  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:22.328258  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:22.328291  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:22.381239  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.381276  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:22.840466  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:22.840504  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
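	(Annotation: the block above is one sweep of minikube's log gathering — each control-plane component is located by name with crictl and the last 400 lines of its container are tailed, while journalctl covers kubelet and CRI-O themselves. A rough shell equivalent, assuming crictl is on the node's PATH:)

	    # Rough sketch of the "Gathering logs for ..." steps above, as run on the node over SSH.
	    # Container IDs come from `crictl ps -a --quiet --name=<component>`.
	    sudo journalctl -u kubelet -n 400                      # kubelet (systemd unit)
	    sudo journalctl -u crio -n 400                         # CRI-O (systemd unit)
	    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	      sudo /usr/bin/crictl logs --tail 400 "$id"           # per-container tail, as in the log lines above
	    done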
	I0729 13:42:22.515296  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:22.529187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:22.529286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:22.573033  301425 cri.go:89] found id: ""
	I0729 13:42:22.573070  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.573082  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:22.573091  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:22.573152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:22.608443  301425 cri.go:89] found id: ""
	I0729 13:42:22.608476  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.608489  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:22.608496  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:22.608566  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:22.641672  301425 cri.go:89] found id: ""
	I0729 13:42:22.641704  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.641716  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:22.641724  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:22.641781  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:22.673902  301425 cri.go:89] found id: ""
	I0729 13:42:22.673934  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.673944  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:22.673952  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:22.674012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:22.715131  301425 cri.go:89] found id: ""
	I0729 13:42:22.715165  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.715179  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:22.715187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:22.715251  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:22.748807  301425 cri.go:89] found id: ""
	I0729 13:42:22.748838  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.748848  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:22.748856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:22.748924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:22.781972  301425 cri.go:89] found id: ""
	I0729 13:42:22.782002  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.782012  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:22.782021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:22.782088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:22.815791  301425 cri.go:89] found id: ""
	I0729 13:42:22.815823  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.815834  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:22.815848  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.815864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.873595  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:22.873631  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:22.888081  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:22.888123  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:22.959873  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:22.959899  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.959912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:23.040996  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:23.041035  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
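	(Annotation: process 301425 — apparently the v1.20.0 profile, judging by the kubectl binary path — gets an empty result for every component because no control-plane containers exist yet, so only kubelet, dmesg, CRI-O and the raw container-status snapshot can be collected, and describe-nodes fails against localhost:8443. For reference, the query and its fallback as they appear above:)

	    # The per-component query that keeps returning "0 containers":
	    sudo crictl ps -a --quiet --name=kube-apiserver       # no output => nothing found
	    # Fallback snapshot used for the "container status" section:
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a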
	I0729 13:42:25.585159  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:25.604154  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.604240  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.645428  301425 cri.go:89] found id: ""
	I0729 13:42:25.645459  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.645466  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:25.645474  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.645534  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.682758  301425 cri.go:89] found id: ""
	I0729 13:42:25.682785  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.682793  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:25.682799  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.682864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.724297  301425 cri.go:89] found id: ""
	I0729 13:42:25.724330  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.724341  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:25.724349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.724401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.761124  301425 cri.go:89] found id: ""
	I0729 13:42:25.761157  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.761168  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:25.761177  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.761229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.802698  301425 cri.go:89] found id: ""
	I0729 13:42:25.802728  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.802741  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:25.802750  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.802804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.840472  301425 cri.go:89] found id: ""
	I0729 13:42:25.840499  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.840509  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:25.840516  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.840586  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.875217  301425 cri.go:89] found id: ""
	I0729 13:42:25.875255  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.875267  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.875273  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:25.875345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:25.919895  301425 cri.go:89] found id: ""
	I0729 13:42:25.919937  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.919948  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:25.919963  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.919988  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:24.324138  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:26.324843  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:25.399606  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:42:25.405339  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:42:25.406585  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:42:25.406607  300746 api_server.go:131] duration metric: took 3.966451518s to wait for apiserver health ...
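	(Annotation: the apiserver wait above polls /healthz until it answers 200 with "ok". An equivalent manual probe against the address in the log — illustrative only; -k skips TLS verification since this goes straight to the node IP rather than through the kubeconfig's CA:)

	    curl -ks https://192.168.61.84:8443/healthz; echo     # prints "ok" once the apiserver is healthy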
	I0729 13:42:25.406615  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:42:25.406640  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.406686  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.442039  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:25.442068  300746 cri.go:89] found id: ""
	I0729 13:42:25.442079  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:25.442140  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.446769  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.446830  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.482122  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:25.482144  300746 cri.go:89] found id: ""
	I0729 13:42:25.482156  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:25.482211  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.486666  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.486729  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.534553  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:25.534584  300746 cri.go:89] found id: ""
	I0729 13:42:25.534595  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:25.534657  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.539546  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.539624  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.577538  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.577562  300746 cri.go:89] found id: ""
	I0729 13:42:25.577572  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:25.577635  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.582377  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.582457  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.628918  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:25.628945  300746 cri.go:89] found id: ""
	I0729 13:42:25.628955  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:25.629027  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.633502  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.633592  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.673133  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.673156  300746 cri.go:89] found id: ""
	I0729 13:42:25.673163  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:25.673210  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.677905  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.677994  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.724757  300746 cri.go:89] found id: ""
	I0729 13:42:25.724780  300746 logs.go:276] 0 containers: []
	W0729 13:42:25.724805  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.724813  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:25.724887  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:25.775101  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.775130  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:25.775136  300746 cri.go:89] found id: ""
	I0729 13:42:25.775144  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:25.775219  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.782008  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.787032  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:25.787064  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.834985  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:25.835026  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.897295  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:25.897338  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.938020  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.938053  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:26.002775  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:26.002808  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:26.021431  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:26.021473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:26.071861  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:26.071898  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:26.130018  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:26.130057  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:26.170233  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:26.170290  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:26.207687  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.207718  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.600518  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:26.600575  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:26.707024  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:26.707074  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:26.753205  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.753240  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:29.302597  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:42:29.302626  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.302630  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.302634  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.302638  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.302641  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.302644  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.302649  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.302654  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.302661  300746 system_pods.go:74] duration metric: took 3.896040202s to wait for pod list to return data ...
	I0729 13:42:29.302670  300746 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:42:29.305640  300746 default_sa.go:45] found service account: "default"
	I0729 13:42:29.305668  300746 default_sa.go:55] duration metric: took 2.989028ms for default service account to be created ...
	I0729 13:42:29.305679  300746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:42:29.310472  300746 system_pods.go:86] 8 kube-system pods found
	I0729 13:42:29.310495  300746 system_pods.go:89] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.310500  300746 system_pods.go:89] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.310505  300746 system_pods.go:89] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.310509  300746 system_pods.go:89] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.310513  300746 system_pods.go:89] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.310517  300746 system_pods.go:89] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.310523  300746 system_pods.go:89] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.310528  300746 system_pods.go:89] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.310536  300746 system_pods.go:126] duration metric: took 4.851477ms to wait for k8s-apps to be running ...
	I0729 13:42:29.310545  300746 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:42:29.310580  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.329123  300746 system_svc.go:56] duration metric: took 18.569258ms WaitForService to wait for kubelet
	I0729 13:42:29.329155  300746 kubeadm.go:582] duration metric: took 4m25.679589837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:42:29.329182  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:42:29.332696  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:42:29.332726  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:42:29.332741  300746 node_conditions.go:105] duration metric: took 3.551684ms to run NodePressure ...
	I0729 13:42:29.332756  300746 start.go:241] waiting for startup goroutines ...
	I0729 13:42:29.332770  300746 start.go:246] waiting for cluster config update ...
	I0729 13:42:29.332784  300746 start.go:255] writing updated cluster config ...
	I0729 13:42:29.333168  300746 ssh_runner.go:195] Run: rm -f paused
	I0729 13:42:29.394738  300746 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:42:29.396826  300746 out.go:177] * Done! kubectl is now configured to use "no-preload-566777" cluster and "default" namespace by default
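	(Annotation: with the "Done!" line the host kubeconfig now defaults to this cluster; the reported client/server skew of one minor version, 1.30.3 vs 1.31.0-beta.0, is within kubectl's supported range. The usual follow-up, using the cluster name from the log:)

	    kubectl config current-context                         # -> no-preload-566777
	    kubectl --context no-preload-566777 get pods -n kube-system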
	I0729 13:42:25.981964  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:25.982005  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:25.997546  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:25.997576  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:26.075879  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:26.075901  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.075917  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.158552  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.158593  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:28.704328  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:28.718946  301425 kubeadm.go:597] duration metric: took 4m3.546660825s to restartPrimaryControlPlane
	W0729 13:42:28.719041  301425 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:28.719086  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:29.251866  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.267009  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:29.277498  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:29.287980  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:29.288003  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:29.288054  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:42:29.297830  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:29.297890  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:29.308263  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:42:29.318332  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:29.318388  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:29.328684  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.339841  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:29.339894  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.351304  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:42:29.363901  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:29.363960  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
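	(Annotation: the grep/rm sequence above is the stale-kubeconfig cleanup — each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; here the files are absent, so every grep exits 2 and the rm is a no-op. Condensed sketch; the endpoint differs per profile, 8443 here and 8444 for default-k8s-diff-port later in this log:)

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done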
	I0729 13:42:29.377255  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:29.453113  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:42:29.453212  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:29.609835  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:29.609970  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:29.610106  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:29.812529  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:29.814455  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:29.814551  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:29.814633  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:29.814727  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:29.814799  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:29.814915  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:29.814979  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:29.815695  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:29.816098  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:29.816602  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:29.817114  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:29.817184  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:29.817266  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:30.122967  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:30.287162  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:30.336346  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:30.516317  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:30.532829  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:30.533732  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:30.533809  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:30.672345  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:30.674334  301425 out.go:204]   - Booting up control plane ...
	I0729 13:42:30.674492  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:30.681661  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:30.681784  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:30.683350  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:30.687290  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:42:28.327998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:30.823998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:32.824105  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:34.825475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:37.324435  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:39.824490  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:42.323305  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:44.329376  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:46.823645  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:47.980926  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.820407091s)
	I0729 13:42:47.981010  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:47.997344  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:48.007813  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:48.017519  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:48.017538  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:48.017579  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:42:48.028739  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:48.028819  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:48.038417  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:42:48.047864  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:48.047921  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:48.057408  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.066977  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:48.067040  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.077017  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:42:48.087204  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:48.087267  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:48.097659  301044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:48.149712  301044 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:42:48.149883  301044 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:48.277280  301044 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:48.277441  301044 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:48.277578  301044 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:48.505523  301044 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:48.507718  301044 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:48.507827  301044 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:48.507941  301044 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:48.508049  301044 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:48.508139  301044 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:48.508245  301044 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:48.508334  301044 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:48.508431  301044 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:48.508518  301044 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:48.508622  301044 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:48.508740  301044 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:48.508824  301044 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:48.508949  301044 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:48.545220  301044 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:48.620528  301044 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:42:48.781015  301044 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:49.039301  301044 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:49.104540  301044 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:49.105022  301044 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:49.107524  301044 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:49.109579  301044 out.go:204]   - Booting up control plane ...
	I0729 13:42:49.109698  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:49.109836  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:49.109924  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:49.129789  301044 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:49.130766  301044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:49.130844  301044 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:49.272901  301044 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:42:49.273017  301044 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:42:50.274804  301044 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001903151s
	I0729 13:42:50.274906  301044 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:42:48.825621  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:51.324025  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.276427  301044 kubeadm.go:310] [api-check] The API server is healthy after 5.001280529s
	I0729 13:42:55.289666  301044 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:42:55.309747  301044 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:42:55.343304  301044 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:42:55.343537  301044 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-972693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:42:55.366319  301044 kubeadm.go:310] [bootstrap-token] Using token: bvsox4.ktqddck1jfi3aduz
	I0729 13:42:55.367592  301044 out.go:204]   - Configuring RBAC rules ...
	I0729 13:42:55.367695  301044 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:42:55.380118  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:42:55.393704  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:42:55.397859  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:42:55.401567  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:42:55.407851  301044 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:42:55.684714  301044 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:42:56.128597  301044 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:42:56.683879  301044 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:42:56.685050  301044 kubeadm.go:310] 
	I0729 13:42:56.685127  301044 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:42:56.685137  301044 kubeadm.go:310] 
	I0729 13:42:56.685216  301044 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:42:56.685226  301044 kubeadm.go:310] 
	I0729 13:42:56.685252  301044 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:42:56.685335  301044 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:42:56.685414  301044 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:42:56.685422  301044 kubeadm.go:310] 
	I0729 13:42:56.685527  301044 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:42:56.685550  301044 kubeadm.go:310] 
	I0729 13:42:56.685607  301044 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:42:56.685617  301044 kubeadm.go:310] 
	I0729 13:42:56.685684  301044 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:42:56.685800  301044 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:42:56.685916  301044 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:42:56.685933  301044 kubeadm.go:310] 
	I0729 13:42:56.686048  301044 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:42:56.686149  301044 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:42:56.686162  301044 kubeadm.go:310] 
	I0729 13:42:56.686277  301044 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686416  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 13:42:56.686449  301044 kubeadm.go:310] 	--control-plane 
	I0729 13:42:56.686462  301044 kubeadm.go:310] 
	I0729 13:42:56.686562  301044 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:42:56.686571  301044 kubeadm.go:310] 
	I0729 13:42:56.686687  301044 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686839  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 13:42:56.687046  301044 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:42:56.687123  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:42:56.687140  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:42:56.689013  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:42:53.324453  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.326475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:56.690282  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:42:56.703026  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
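	(Annotation: the 496-byte 1-k8s.conflist itself is not reproduced in the log. As a purely illustrative sketch — not the actual file minikube writes — a bridge + host-local CNI config of roughly that shape looks like:)

	    # Illustrative bridge CNI conflist; plugin set and subnet are assumptions, not the logged file.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF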
	I0729 13:42:56.722677  301044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-972693 minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=default-k8s-diff-port-972693 minikube.k8s.io/primary=true
	I0729 13:42:56.738921  301044 ops.go:34] apiserver oom_adj: -16
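	(Annotation: the -16 read back from the apiserver's oom_adj confirms the process has been deprioritized as an OOM-kill target. Checked directly — using pgrep -n to pick a single newest match, unlike the bare pgrep in the logged command:)

	    cat /proc/$(pgrep -n kube-apiserver)/oom_adj           # expect -16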
	I0729 13:42:56.902369  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.402842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.902902  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.403358  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.903112  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.402540  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.902605  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.402440  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.903011  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:01.403295  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.823966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:00.323772  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:01.818493  300705 pod_ready.go:81] duration metric: took 4m0.000972043s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:43:01.818528  300705 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:43:01.818537  300705 pod_ready.go:38] duration metric: took 4m4.037818748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:01.818555  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:01.818589  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:01.818643  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:01.874334  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:01.874359  300705 cri.go:89] found id: ""
	I0729 13:43:01.874369  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:01.874439  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.879122  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:01.879214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:01.919779  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:01.919804  300705 cri.go:89] found id: ""
	I0729 13:43:01.919814  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:01.919874  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.924895  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:01.924963  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:01.970365  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:01.970386  300705 cri.go:89] found id: ""
	I0729 13:43:01.970394  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:01.970444  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.975331  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:01.975409  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:02.013029  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.013062  300705 cri.go:89] found id: ""
	I0729 13:43:02.013074  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:02.013136  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.017958  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:02.018019  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:02.062357  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.062385  300705 cri.go:89] found id: ""
	I0729 13:43:02.062394  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:02.062463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.066791  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:02.066841  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:02.103790  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:02.103812  300705 cri.go:89] found id: ""
	I0729 13:43:02.103821  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:02.103882  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.108242  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:02.108293  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:02.151089  300705 cri.go:89] found id: ""
	I0729 13:43:02.151122  300705 logs.go:276] 0 containers: []
	W0729 13:43:02.151133  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:02.151141  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:02.151204  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:02.205700  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:02.205727  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.205732  300705 cri.go:89] found id: ""
	I0729 13:43:02.205741  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:02.205790  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.210332  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.214889  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:02.214913  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:02.229589  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:02.229621  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:02.278361  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:02.278394  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:02.319117  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:02.319146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.357874  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:02.357908  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.402114  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:02.402146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.442480  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:02.442514  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:01.903256  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.403400  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.902925  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.402616  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.903161  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.403255  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.902489  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.402506  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.902530  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:06.402436  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.953914  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:02.953961  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:03.013404  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:03.013441  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:03.151261  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:03.151294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:03.199910  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:03.199964  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:03.257103  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:03.257137  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:03.308519  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:03.308559  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:05.857929  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:05.878306  300705 api_server.go:72] duration metric: took 4m15.820258046s to wait for apiserver process to appear ...
	I0729 13:43:05.878338  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:05.878383  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:05.878451  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:05.924031  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:05.924071  300705 cri.go:89] found id: ""
	I0729 13:43:05.924083  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:05.924151  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.929284  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:05.929363  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:05.968980  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:05.969003  300705 cri.go:89] found id: ""
	I0729 13:43:05.969010  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:05.969056  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.973451  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:05.973516  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:06.011760  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.011784  300705 cri.go:89] found id: ""
	I0729 13:43:06.011794  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:06.011857  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.016065  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:06.016132  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:06.066319  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.066345  300705 cri.go:89] found id: ""
	I0729 13:43:06.066353  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:06.066420  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.071060  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:06.071120  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:06.117383  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.117405  300705 cri.go:89] found id: ""
	I0729 13:43:06.117413  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:06.117463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.121968  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:06.122053  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:06.156125  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.156151  300705 cri.go:89] found id: ""
	I0729 13:43:06.156160  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:06.156209  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.160301  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:06.160366  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:06.206751  300705 cri.go:89] found id: ""
	I0729 13:43:06.206780  300705 logs.go:276] 0 containers: []
	W0729 13:43:06.206790  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:06.206798  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:06.206860  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:06.248884  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.248918  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:06.248925  300705 cri.go:89] found id: ""
	I0729 13:43:06.248936  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:06.249006  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.253087  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.257229  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:06.257252  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.291495  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:06.291528  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.330190  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:06.330219  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.366500  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:06.366536  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.424871  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:06.424906  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:06.855025  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:06.855069  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:06.870025  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:06.870055  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:06.986590  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:06.986630  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:07.036972  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:07.037007  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:07.092602  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:07.092646  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:07.135326  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:07.135366  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:07.190208  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:07.190247  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:07.241865  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:07.241896  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.902842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.402861  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.903148  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.402619  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.902869  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.403349  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.903277  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.402468  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.535843  301044 kubeadm.go:1113] duration metric: took 13.813154738s to wait for elevateKubeSystemPrivileges
	I0729 13:43:10.535879  301044 kubeadm.go:394] duration metric: took 5m10.527995876s to StartCluster
	I0729 13:43:10.535899  301044 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.535991  301044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:43:10.538845  301044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.539141  301044 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:43:10.539343  301044 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:43:10.539513  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:43:10.539528  301044 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539556  301044 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539574  301044 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-972693"
	I0729 13:43:10.539587  301044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-972693"
	I0729 13:43:10.539600  301044 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539623  301044 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.539635  301044 addons.go:243] addon metrics-server should already be in state true
	I0729 13:43:10.539692  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	W0729 13:43:10.539594  301044 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:43:10.539817  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.540342  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540368  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540380  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540399  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540664  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540814  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.542249  301044 out.go:177] * Verifying Kubernetes components...
	I0729 13:43:10.543974  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:43:10.561555  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0729 13:43:10.561585  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0729 13:43:10.561820  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 13:43:10.562096  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562160  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562579  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562694  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562711  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.562750  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562766  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563224  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563236  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563496  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.563516  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563793  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563923  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.563959  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563982  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.564526  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.564781  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.569041  301044 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.569062  301044 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:43:10.569091  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.569443  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.569462  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.580340  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0729 13:43:10.580852  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.581371  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.581384  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.581724  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.581911  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.583937  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0729 13:43:10.584108  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.584422  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.584864  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.584881  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.585262  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.585445  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.586285  301044 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:43:10.586973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.587855  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:43:10.587873  301044 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:43:10.587907  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.588885  301044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:43:10.689091  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:43:10.689558  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:10.689837  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:10.590240  301044 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.590258  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:43:10.590275  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.592026  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0729 13:43:10.592306  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.592778  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.592859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.592877  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.593162  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.593295  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.593382  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.593455  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.593663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594055  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.594082  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594233  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.594388  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.594485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.594621  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.594882  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.594892  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.595227  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.595663  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.595680  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.611094  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0729 13:43:10.611617  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.612200  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.612224  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.612600  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.612973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.614541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.614743  301044 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:10.614757  301044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:43:10.614774  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.617611  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.618064  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.618416  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.618595  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.618754  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.791924  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:43:10.850744  301044 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866102  301044 node_ready.go:49] node "default-k8s-diff-port-972693" has status "Ready":"True"
	I0729 13:43:10.866137  301044 node_ready.go:38] duration metric: took 15.35404ms for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866171  301044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:10.877661  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:10.958120  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.981335  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:43:10.981363  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:43:10.982804  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:11.145078  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:43:11.145108  301044 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:43:11.236628  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:11.236658  301044 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:43:11.308646  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315025489s)
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290345752s)
	I0729 13:43:12.273254  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273270  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273283  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273296  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273572  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273589  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273598  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273606  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273704  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273721  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273731  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273739  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.275558  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275601  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275616  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.275624  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275634  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275644  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.309442  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.309473  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.309839  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.309888  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.309909  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.464546  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.155855113s)
	I0729 13:43:12.464601  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.464614  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465037  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465060  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465071  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.465081  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465398  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.465418  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465476  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465494  301044 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-972693"
	I0729 13:43:12.467315  301044 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:43:09.811571  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:43:09.817221  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:43:09.818319  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:09.818342  300705 api_server.go:131] duration metric: took 3.939996032s to wait for apiserver health ...
	I0729 13:43:09.818350  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:09.818373  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:09.818425  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:09.861856  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:09.861883  300705 cri.go:89] found id: ""
	I0729 13:43:09.861894  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:09.861962  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.867142  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:09.867216  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:09.909767  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:09.909795  300705 cri.go:89] found id: ""
	I0729 13:43:09.909808  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:09.909877  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.914410  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:09.914482  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:09.953540  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:09.953568  300705 cri.go:89] found id: ""
	I0729 13:43:09.953578  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:09.953637  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.958140  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:09.958214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:09.999809  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:09.999836  300705 cri.go:89] found id: ""
	I0729 13:43:09.999846  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:09.999911  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.004505  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:10.004587  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:10.049146  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.049173  300705 cri.go:89] found id: ""
	I0729 13:43:10.049182  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:10.049252  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.053631  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:10.053698  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:10.090361  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.090386  300705 cri.go:89] found id: ""
	I0729 13:43:10.090396  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:10.090442  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.095528  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:10.095588  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:10.131892  300705 cri.go:89] found id: ""
	I0729 13:43:10.131925  300705 logs.go:276] 0 containers: []
	W0729 13:43:10.131937  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:10.131944  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:10.132008  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:10.169101  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.169127  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.169133  300705 cri.go:89] found id: ""
	I0729 13:43:10.169142  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:10.169203  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.174716  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.179196  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:10.179217  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:10.222803  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:10.222833  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:10.265944  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:10.265975  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.310266  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:10.310294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.370562  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:10.370611  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.415759  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:10.415803  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:10.467672  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:10.467702  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:10.531249  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:10.531293  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:10.550454  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:10.550485  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:10.709028  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:10.709068  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:10.761048  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:10.761093  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:10.813125  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:10.813169  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.852581  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:10.852608  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:13.725236  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:43:13.725272  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.725279  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.725284  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.725289  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.725293  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.725298  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.725306  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.725312  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.725322  300705 system_pods.go:74] duration metric: took 3.906966083s to wait for pod list to return data ...
	I0729 13:43:13.725335  300705 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:13.727954  300705 default_sa.go:45] found service account: "default"
	I0729 13:43:13.727984  300705 default_sa.go:55] duration metric: took 2.638639ms for default service account to be created ...
	I0729 13:43:13.728032  300705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:13.733141  300705 system_pods.go:86] 8 kube-system pods found
	I0729 13:43:13.733163  300705 system_pods.go:89] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.733169  300705 system_pods.go:89] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.733173  300705 system_pods.go:89] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.733177  300705 system_pods.go:89] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.733181  300705 system_pods.go:89] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.733185  300705 system_pods.go:89] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.733191  300705 system_pods.go:89] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.733196  300705 system_pods.go:89] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.733205  300705 system_pods.go:126] duration metric: took 5.16021ms to wait for k8s-apps to be running ...
	I0729 13:43:13.733213  300705 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:13.733255  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:13.755011  300705 system_svc.go:56] duration metric: took 21.784065ms WaitForService to wait for kubelet
	I0729 13:43:13.755042  300705 kubeadm.go:582] duration metric: took 4m23.697000108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:13.755068  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:13.758549  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:13.758572  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:13.758586  300705 node_conditions.go:105] duration metric: took 3.512205ms to run NodePressure ...
	I0729 13:43:13.758601  300705 start.go:241] waiting for startup goroutines ...
	I0729 13:43:13.758612  300705 start.go:246] waiting for cluster config update ...
	I0729 13:43:13.758625  300705 start.go:255] writing updated cluster config ...
	I0729 13:43:13.758945  300705 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:13.810333  300705 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:13.812397  300705 out.go:177] * Done! kubectl is now configured to use "embed-certs-135920" cluster and "default" namespace by default
	I0729 13:43:12.468541  301044 addons.go:510] duration metric: took 1.929219306s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:43:12.887280  301044 pod_ready.go:102] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:13.386255  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.386279  301044 pod_ready.go:81] duration metric: took 2.508586907s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.386291  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391278  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.391302  301044 pod_ready.go:81] duration metric: took 5.00403ms for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391313  301044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396324  301044 pod_ready.go:92] pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.396343  301044 pod_ready.go:81] duration metric: took 5.022707ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396350  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403008  301044 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.403026  301044 pod_ready.go:81] duration metric: took 6.670677ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403035  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407836  301044 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.407856  301044 pod_ready.go:81] duration metric: took 4.814401ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407868  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783140  301044 pod_ready.go:92] pod "kube-proxy-tfsk9" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.783168  301044 pod_ready.go:81] duration metric: took 375.291599ms for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783181  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182560  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:14.182588  301044 pod_ready.go:81] duration metric: took 399.399691ms for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182597  301044 pod_ready.go:38] duration metric: took 3.316409576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:14.182610  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:14.182661  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:14.210715  301044 api_server.go:72] duration metric: took 3.671529553s to wait for apiserver process to appear ...
	I0729 13:43:14.210749  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:14.210790  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:43:14.214886  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:43:14.215773  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:14.215795  301044 api_server.go:131] duration metric: took 5.0389ms to wait for apiserver health ...
	I0729 13:43:14.215802  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:14.386356  301044 system_pods.go:59] 9 kube-system pods found
	I0729 13:43:14.386389  301044 system_pods.go:61] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.386394  301044 system_pods.go:61] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.386398  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.386401  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.386405  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.386409  301044 system_pods.go:61] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.386412  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.386417  301044 system_pods.go:61] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.386420  301044 system_pods.go:61] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.386430  301044 system_pods.go:74] duration metric: took 170.622271ms to wait for pod list to return data ...
	I0729 13:43:14.386437  301044 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:14.582618  301044 default_sa.go:45] found service account: "default"
	I0729 13:43:14.582643  301044 default_sa.go:55] duration metric: took 196.19918ms for default service account to be created ...
	I0729 13:43:14.582652  301044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:14.785669  301044 system_pods.go:86] 9 kube-system pods found
	I0729 13:43:14.785701  301044 system_pods.go:89] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.785707  301044 system_pods.go:89] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.785711  301044 system_pods.go:89] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.785719  301044 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.785723  301044 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.785727  301044 system_pods.go:89] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.785731  301044 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.785737  301044 system_pods.go:89] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.785741  301044 system_pods.go:89] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.785750  301044 system_pods.go:126] duration metric: took 203.092668ms to wait for k8s-apps to be running ...
	I0729 13:43:14.785756  301044 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:14.785801  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:14.802927  301044 system_svc.go:56] duration metric: took 17.160927ms WaitForService to wait for kubelet
	I0729 13:43:14.802957  301044 kubeadm.go:582] duration metric: took 4.263780375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:14.802977  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:14.983106  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:14.983135  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:14.983146  301044 node_conditions.go:105] duration metric: took 180.164781ms to run NodePressure ...
	I0729 13:43:14.983159  301044 start.go:241] waiting for startup goroutines ...
	I0729 13:43:14.983165  301044 start.go:246] waiting for cluster config update ...
	I0729 13:43:14.983175  301044 start.go:255] writing updated cluster config ...
	I0729 13:43:14.983443  301044 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:15.038438  301044 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:15.040318  301044 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-972693" cluster and "default" namespace by default
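	(Editor's note) The block above is the tail of a successful start: minikube waits for the labelled system pods, probes the apiserver healthz endpoint, lists the kube-system pods, checks the default service account, and finally confirms the kubelet unit is active. A rough manual equivalent of those checks, run directly on the node, is sketched below; the healthz URL is the one reported in the log, and using curl -k in place of minikube's in-process health client is an assumption of the sketch, not what minikube runs.

	    # apiserver process present? (same pgrep pattern as the log)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # apiserver healthz should answer "ok" (curl -k stands in for minikube's client)
	    curl -ks https://192.168.50.34:8444/healthz; echo

	    # kubelet unit active? (command as logged)
	    sudo systemctl is-active --quiet service kubelet && echo kubelet active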
	I0729 13:43:15.690809  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:15.691011  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:25.691962  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:25.692244  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:45.693269  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:45.693473  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696107  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:44:25.696300  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696307  301425 kubeadm.go:310] 
	I0729 13:44:25.696341  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:44:25.696400  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:44:25.696419  301425 kubeadm.go:310] 
	I0729 13:44:25.696463  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:44:25.696510  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:44:25.696653  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:44:25.696674  301425 kubeadm.go:310] 
	I0729 13:44:25.696818  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:44:25.696868  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:44:25.696921  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:44:25.696930  301425 kubeadm.go:310] 
	I0729 13:44:25.697076  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:44:25.697192  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:44:25.697206  301425 kubeadm.go:310] 
	I0729 13:44:25.697349  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:44:25.697459  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:44:25.697568  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:44:25.697669  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:44:25.697680  301425 kubeadm.go:310] 
	I0729 13:44:25.698359  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:44:25.698490  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:44:25.698596  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:44:25.698771  301425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:44:25.698848  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:44:26.160539  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:26.175482  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:44:26.185562  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:44:26.185593  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:44:26.185657  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:44:26.195781  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:44:26.195865  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:44:26.207404  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:44:26.217068  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:44:26.217188  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:44:26.226075  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.234622  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:44:26.234684  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.243756  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:44:26.252630  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:44:26.252695  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
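	(Editor's note) Before retrying kubeadm init, minikube checks each residual kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the grep fails, exactly as the four grep/rm pairs above show. Collapsed into a loop (a hypothetical consolidation for readability, not what minikube itself executes), the cleanup amounts to the sketch below; in this particular run the files are already absent, so the rm calls are effectively no-ops.

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done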
	I0729 13:44:26.262846  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:44:26.340215  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:44:26.340318  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:44:26.496049  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:44:26.496199  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:44:26.496327  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:44:26.678135  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:44:26.680089  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:44:26.680173  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:44:26.680257  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:44:26.680378  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:44:26.680470  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:44:26.680570  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:44:26.680653  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:44:26.680751  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:44:26.681022  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:44:26.681519  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:44:26.681876  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:44:26.681994  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:44:26.682083  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:44:26.762680  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:44:26.922517  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:44:26.973731  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:44:27.193064  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:44:27.216477  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:44:27.219036  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:44:27.219293  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:44:27.386424  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:44:27.388194  301425 out.go:204]   - Booting up control plane ...
	I0729 13:44:27.388340  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:44:27.390345  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:44:27.391455  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:44:27.392303  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:44:27.394301  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:45:07.396989  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:45:07.397449  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:07.397719  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:12.397982  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:12.398297  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:22.398751  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:22.399010  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:42.399462  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:42.399675  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398413  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:46:22.398684  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398700  301425 kubeadm.go:310] 
	I0729 13:46:22.398763  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:46:22.398844  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:46:22.398886  301425 kubeadm.go:310] 
	I0729 13:46:22.398948  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:46:22.399002  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:46:22.399132  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:46:22.399145  301425 kubeadm.go:310] 
	I0729 13:46:22.399287  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:46:22.399346  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:46:22.399392  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:46:22.399404  301425 kubeadm.go:310] 
	I0729 13:46:22.399530  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:46:22.399610  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:46:22.399617  301425 kubeadm.go:310] 
	I0729 13:46:22.399735  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:46:22.399844  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:46:22.399943  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:46:22.400021  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:46:22.400035  301425 kubeadm.go:310] 
	I0729 13:46:22.400291  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:46:22.400370  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:46:22.400440  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:46:22.400520  301425 kubeadm.go:394] duration metric: took 7m57.286753846s to StartCluster
	I0729 13:46:22.400612  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:46:22.400692  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:46:22.446188  301425 cri.go:89] found id: ""
	I0729 13:46:22.446216  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.446225  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:46:22.446232  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:46:22.446289  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:46:22.484089  301425 cri.go:89] found id: ""
	I0729 13:46:22.484118  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.484128  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:46:22.484135  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:46:22.484197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:46:22.526817  301425 cri.go:89] found id: ""
	I0729 13:46:22.526846  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.526854  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:46:22.526860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:46:22.526912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:46:22.564787  301425 cri.go:89] found id: ""
	I0729 13:46:22.564834  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.564846  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:46:22.564854  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:46:22.564920  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:46:22.601843  301425 cri.go:89] found id: ""
	I0729 13:46:22.601881  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.601892  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:46:22.601900  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:46:22.601980  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:46:22.637420  301425 cri.go:89] found id: ""
	I0729 13:46:22.637448  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.637455  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:46:22.637462  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:46:22.637519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:46:22.672427  301425 cri.go:89] found id: ""
	I0729 13:46:22.672465  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.672476  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:46:22.672485  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:46:22.672549  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:46:22.708256  301425 cri.go:89] found id: ""
	I0729 13:46:22.708285  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.708294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:46:22.708306  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:46:22.708323  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:46:22.819287  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:46:22.819327  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:46:22.859298  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:46:22.859339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:46:22.914290  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:46:22.914342  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:46:22.936919  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:46:22.936951  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:46:23.035889  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 13:46:23.035939  301425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:46:23.035991  301425 out.go:239] * 
	W0729 13:46:23.036103  301425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.036137  301425 out.go:239] * 
	W0729 13:46:23.037370  301425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:46:23.040573  301425 out.go:177] 
	W0729 13:46:23.042130  301425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.042173  301425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:46:23.042193  301425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:46:23.043539  301425 out.go:177] 
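	(Editor's note) The start ultimately exits with K8S_KUBELET_NOT_RUNNING, and the log's own suggestion is to inspect the kubelet journal and retry with the systemd cgroup driver. A minimal sketch of that follow-up is below; <profile> is a placeholder for the affected cluster profile, which is not shown in this excerpt.

	    # inspect why the kubelet never became healthy
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100

	    # retry with the cgroup driver override suggested in the log
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd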
	
	
	==> CRI-O <==
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.502564724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261091502540472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25b347f0-2fd7-463a-9711-e731c1db152e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.503028042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f9c3738-0905-4fbc-b280-08d03ed42d7b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.503080108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f9c3738-0905-4fbc-b280-08d03ed42d7b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.503348508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f9c3738-0905-4fbc-b280-08d03ed42d7b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.549345178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fffa3c1-4dfe-4a23-9b56-bce1d9393af9 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.549504072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fffa3c1-4dfe-4a23-9b56-bce1d9393af9 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.550808661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7f791b8-6384-4331-bb73-b377523d3ad6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.551170940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261091551139992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7f791b8-6384-4331-bb73-b377523d3ad6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.551878014Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7b19ae7-80d2-432c-84ba-c0e2057b628b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.551931125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7b19ae7-80d2-432c-84ba-c0e2057b628b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.552142059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7b19ae7-80d2-432c-84ba-c0e2057b628b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.590219251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e96fe7da-48c5-4d42-b03c-0f2c02195e35 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.590289278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e96fe7da-48c5-4d42-b03c-0f2c02195e35 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.591624538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2205267d-21e2-4e2a-b16e-49a39e1d6a9a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.591953025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261091591930381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2205267d-21e2-4e2a-b16e-49a39e1d6a9a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.592493402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee0ce76a-58ba-461c-b7e6-81b8eef4e104 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.592545923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee0ce76a-58ba-461c-b7e6-81b8eef4e104 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.592750358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee0ce76a-58ba-461c-b7e6-81b8eef4e104 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.629013709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a62dcff-ec82-4ef3-a038-c758c19896b4 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.629102001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a62dcff-ec82-4ef3-a038-c758c19896b4 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.630261877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b06626bc-5b95-4fd7-a7fa-d8baaf32682d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.630849696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261091630823266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b06626bc-5b95-4fd7-a7fa-d8baaf32682d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.631653265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ce97256-6d0e-453e-8e07-21fbb98f8c85 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.631726355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ce97256-6d0e-453e-8e07-21fbb98f8c85 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:51:31 no-preload-566777 crio[707]: time="2024-07-29 13:51:31.631932190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ce97256-6d0e-453e-8e07-21fbb98f8c85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5dcd5030f62fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       4                   e4dc76a0df61a       storage-provisioner
	a25a1cef4fe62       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   e9a8ce4064308       busybox
	5889da7fe3143       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   7832ee370975d       coredns-5cfdc65f69-kkrqd
	09fdadca1aa7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       3                   e4dc76a0df61a       storage-provisioner
	a2ed90bc70759       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      13 minutes ago      Running             kube-proxy                1                   2deb18f3e0f23       kube-proxy-ql6wf
	5c91d66f36628       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      13 minutes ago      Running             kube-controller-manager   1                   54a01ed813dbd       kube-controller-manager-no-preload-566777
	f08ba8d78f505       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      13 minutes ago      Running             kube-apiserver            1                   fafa613b78cb7       kube-apiserver-no-preload-566777
	6d236da3b529e       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      13 minutes ago      Running             kube-scheduler            1                   73d98712f2ebc       kube-scheduler-no-preload-566777
	f784cabd7fc33       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      13 minutes ago      Running             etcd                      1                   5f8674c0bd92c       etcd-no-preload-566777
	
	
	==> coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43951 - 28014 "HINFO IN 7181564784847732016.5453138748017200787. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009842007s
	
	
	==> describe nodes <==
	Name:               no-preload-566777
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-566777
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=no-preload-566777
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-566777
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:51:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:48:42 +0000   Mon, 29 Jul 2024 13:29:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:48:42 +0000   Mon, 29 Jul 2024 13:29:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:48:42 +0000   Mon, 29 Jul 2024 13:29:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:48:42 +0000   Mon, 29 Jul 2024 13:38:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.84
	  Hostname:    no-preload-566777
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f2be5108ba24204911e831586431a5d
	  System UUID:                7f2be510-8ba2-4204-911e-831586431a5d
	  Boot ID:                    1d18b67e-906a-4f97-b0b1-1bb083aa856d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5cfdc65f69-kkrqd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-566777                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-566777             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-566777    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-ql6wf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-566777             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-78fcd8795b-dv8pr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-566777 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-566777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-566777 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node no-preload-566777 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-566777 event: Registered Node no-preload-566777 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-566777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-566777 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-566777 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-566777 event: Registered Node no-preload-566777 in Controller
	
	
	==> dmesg <==
	[Jul29 13:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049799] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040707] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.748284] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.381971] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.578120] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.118438] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.066924] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057800] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.157541] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.126597] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.295899] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[ +15.081918] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +0.058288] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.470739] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +2.932394] kauditd_printk_skb: 97 callbacks suppressed
	[Jul29 13:38] kauditd_printk_skb: 42 callbacks suppressed
	[  +1.663493] systemd-fstab-generator[1969]: Ignoring "noauto" option for root device
	[  +4.370181] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] <==
	{"level":"info","ts":"2024-07-29T13:38:02.01061Z","caller":"traceutil/trace.go:171","msg":"trace[529766298] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"433.873565ms","start":"2024-07-29T13:38:01.576714Z","end":"2024-07-29T13:38:02.010588Z","steps":["trace[529766298] 'process raft request'  (duration: 127.163977ms)","trace[529766298] 'compare'  (duration: 302.892486ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T13:38:02.010714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.5767Z","time spent":"433.972057ms","remote":"127.0.0.1:38016","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":559,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/no-preload-566777.17e6b2999e548b4e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/no-preload-566777.17e6b2999e548b4e\" value_size:482 lease:7744661932870693761 >> failure:<>"}
	{"level":"info","ts":"2024-07-29T13:38:02.016053Z","caller":"traceutil/trace.go:171","msg":"trace[1629862690] linearizableReadLoop","detail":"{readStateIndex:538; appliedIndex:536; }","duration":"429.374905ms","start":"2024-07-29T13:38:01.586662Z","end":"2024-07-29T13:38:02.016037Z","steps":["trace[1629862690] 'read index received'  (duration: 117.058438ms)","trace[1629862690] 'applied index is now lower than readState.Index'  (duration: 312.315438ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:38:02.016166Z","caller":"traceutil/trace.go:171","msg":"trace[1764473577] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"439.227134ms","start":"2024-07-29T13:38:01.57693Z","end":"2024-07-29T13:38:02.016157Z","steps":["trace[1764473577] 'process raft request'  (duration: 438.991356ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.016244Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.57692Z","time spent":"439.269269ms","remote":"127.0.0.1:37534","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":749,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.17e6b299720d2536\" mod_revision:509 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.17e6b299720d2536\" value_size:666 lease:7744661932870693604 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.17e6b299720d2536\" > >"}
	{"level":"warn","ts":"2024-07-29T13:38:02.016575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.903078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-07-29T13:38:02.01661Z","caller":"traceutil/trace.go:171","msg":"trace[128830485] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:515; }","duration":"429.943218ms","start":"2024-07-29T13:38:01.586658Z","end":"2024-07-29T13:38:02.016602Z","steps":["trace[128830485] 'agreement among raft nodes before linearized reading'  (duration: 429.795017ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.016636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.586629Z","time spent":"430.001576ms","remote":"127.0.0.1:37672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":233,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"warn","ts":"2024-07-29T13:38:02.016827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.130031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:4008"}
	{"level":"info","ts":"2024-07-29T13:38:02.016849Z","caller":"traceutil/trace.go:171","msg":"trace[917067676] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:515; }","duration":"430.152914ms","start":"2024-07-29T13:38:01.586689Z","end":"2024-07-29T13:38:02.016842Z","steps":["trace[917067676] 'agreement among raft nodes before linearized reading'  (duration: 430.064169ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.016872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.58668Z","time spent":"430.184966ms","remote":"127.0.0.1:37652","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":4032,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2024-07-29T13:38:02.017542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"419.647574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T13:38:02.017581Z","caller":"traceutil/trace.go:171","msg":"trace[2023227524] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:515; }","duration":"419.68803ms","start":"2024-07-29T13:38:01.597879Z","end":"2024-07-29T13:38:02.017567Z","steps":["trace[2023227524] 'agreement among raft nodes before linearized reading'  (duration: 419.163268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.017611Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.597851Z","time spent":"419.753786ms","remote":"127.0.0.1:37638","response type":"/etcdserverpb.KV/Range","request count":0,"request size":21,"response count":0,"response size":29,"request content":"key:\"/registry/minions\" limit:1 "}
	{"level":"warn","ts":"2024-07-29T13:38:02.017742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"428.809642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-07-29T13:38:02.017766Z","caller":"traceutil/trace.go:171","msg":"trace[288506604] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:515; }","duration":"428.832782ms","start":"2024-07-29T13:38:01.588927Z","end":"2024-07-29T13:38:02.01776Z","steps":["trace[288506604] 'agreement among raft nodes before linearized reading'  (duration: 428.787488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.017785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.588894Z","time spent":"428.886369ms","remote":"127.0.0.1:37672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":240,"request content":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" "}
	{"level":"warn","ts":"2024-07-29T13:38:25.260032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.044462ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7744661932870694189 > lease_revoke:<id:6b7a90feb63f748c>","response":"size:29"}
	{"level":"warn","ts":"2024-07-29T13:38:44.060332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.629894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-dv8pr\" ","response":"range_response_count:1 size:4383"}
	{"level":"info","ts":"2024-07-29T13:38:44.060488Z","caller":"traceutil/trace.go:171","msg":"trace[757519653] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-dv8pr; range_end:; response_count:1; response_revision:611; }","duration":"246.800895ms","start":"2024-07-29T13:38:43.813672Z","end":"2024-07-29T13:38:44.060473Z","steps":["trace[757519653] 'range keys from in-memory index tree'  (duration: 246.450546ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:44.06062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.817935ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T13:38:44.060655Z","caller":"traceutil/trace.go:171","msg":"trace[549234005] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:611; }","duration":"349.85894ms","start":"2024-07-29T13:38:43.710787Z","end":"2024-07-29T13:38:44.060646Z","steps":["trace[549234005] 'range keys from in-memory index tree'  (duration: 349.80977ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:47:56.393125Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2024-07-29T13:47:56.405695Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":832,"took":"10.749065ms","hash":1731901634,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2801664,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-29T13:47:56.406487Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1731901634,"revision":832,"compact-revision":-1}
	
	
	==> kernel <==
	 13:51:31 up 14 min,  0 users,  load average: 0.07, 0.19, 0.12
	Linux no-preload-566777 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 13:47:59.082337       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:47:59.082446       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 13:47:59.083433       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 13:47:59.083507       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:48:59.084201       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:48:59.084283       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 13:48:59.084331       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:48:59.084509       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 13:48:59.085580       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 13:48:59.085644       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:50:59.086739       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:50:59.087031       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 13:50:59.087105       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:50:59.087205       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 13:50:59.088320       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 13:50:59.088467       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] <==
	E0729 13:46:04.459638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:46:04.547887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:46:34.465079       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:46:34.556268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:47:04.472621       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:47:04.563770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:47:34.479545       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:47:34.571875       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:48:04.486104       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:48:04.579338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:48:34.492526       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:48:34.588795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:48:42.264604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-566777"
	I0729 13:48:52.923650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="378.43µs"
	E0729 13:49:04.498652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:49:04.597755       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:49:04.920279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="89.825µs"
	E0729 13:49:34.505693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:49:34.606911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:50:04.512758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:50:04.616788       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:50:34.519993       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:50:34.624211       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:51:04.528536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:51:04.633575       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 13:37:59.915978       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 13:38:00.316057       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.84"]
	E0729 13:38:00.316465       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 13:38:00.360268       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 13:38:00.360463       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:38:00.360593       1 server_linux.go:170] "Using iptables Proxier"
	I0729 13:38:00.364051       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 13:38:00.364869       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 13:38:00.365032       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:38:00.370426       1 config.go:197] "Starting service config controller"
	I0729 13:38:00.370503       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:38:00.370735       1 config.go:104] "Starting endpoint slice config controller"
	I0729 13:38:00.370773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:38:00.374000       1 config.go:326] "Starting node config controller"
	I0729 13:38:00.374530       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:38:00.471107       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:38:00.471240       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:38:00.475517       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] <==
	I0729 13:37:55.520050       1 serving.go:386] Generated self-signed cert in-memory
	W0729 13:37:58.019105       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 13:37:58.019225       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:37:58.019268       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 13:37:58.019298       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 13:37:58.105624       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 13:37:58.105683       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:37:58.114496       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 13:37:58.114607       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:37:58.117636       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 13:37:58.117734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 13:37:58.215051       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:48:53 no-preload-566777 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:48:53 no-preload-566777 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:48:53 no-preload-566777 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:49:04 no-preload-566777 kubelet[1282]: E0729 13:49:04.905197    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:49:16 no-preload-566777 kubelet[1282]: E0729 13:49:16.906151    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:49:31 no-preload-566777 kubelet[1282]: E0729 13:49:31.906938    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:49:44 no-preload-566777 kubelet[1282]: E0729 13:49:44.905519    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:49:53 no-preload-566777 kubelet[1282]: E0729 13:49:53.939046    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:49:53 no-preload-566777 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:49:53 no-preload-566777 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:49:53 no-preload-566777 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:49:53 no-preload-566777 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:49:56 no-preload-566777 kubelet[1282]: E0729 13:49:56.905115    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:50:07 no-preload-566777 kubelet[1282]: E0729 13:50:07.905569    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:50:20 no-preload-566777 kubelet[1282]: E0729 13:50:20.907829    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:50:35 no-preload-566777 kubelet[1282]: E0729 13:50:35.906252    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:50:47 no-preload-566777 kubelet[1282]: E0729 13:50:47.906004    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:50:53 no-preload-566777 kubelet[1282]: E0729 13:50:53.935647    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:50:53 no-preload-566777 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:50:53 no-preload-566777 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:50:53 no-preload-566777 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:50:53 no-preload-566777 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:51:01 no-preload-566777 kubelet[1282]: E0729 13:51:01.905309    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:51:13 no-preload-566777 kubelet[1282]: E0729 13:51:13.907866    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:51:25 no-preload-566777 kubelet[1282]: E0729 13:51:25.906274    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	
	
	==> storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] <==
	I0729 13:38:00.543821       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 13:38:00.546637       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] <==
	I0729 13:38:16.016716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:38:16.029786       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:38:16.030900       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:38:33.435978       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:38:33.436507       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"415690b2-bf97-40fe-a529-12c868c1546e", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-566777_4ff1a963-eaff-4771-9f52-7083647aaf80 became leader
	I0729 13:38:33.436834       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-566777_4ff1a963-eaff-4771-9f52-7083647aaf80!
	I0729 13:38:33.537275       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-566777_4ff1a963-eaff-4771-9f52-7083647aaf80!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-566777 -n no-preload-566777
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-566777 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-dv8pr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-566777 describe pod metrics-server-78fcd8795b-dv8pr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-566777 describe pod metrics-server-78fcd8795b-dv8pr: exit status 1 (65.774722ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-dv8pr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-566777 describe pod metrics-server-78fcd8795b-dv8pr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.25s)
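For reference, the condition this assertion polls can be re-checked by hand. The commands below are an illustrative sketch, not part of the harness output: they reuse the profile name and pod details quoted in the logs above, and they assume the same k8s-app=kubernetes-dashboard label and kubernetes-dashboard namespace that the sibling embed-certs check below waits on.

	# List the dashboard pods the test waits for.
	kubectl --context no-preload-566777 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide

	# Block until one reports Ready (the harness allows up to 9m0s).
	kubectl --context no-preload-566777 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

	# Describe the ImagePullBackOff metrics-server pod in the kube-system namespace the kubelet messages above place it in.
	kubectl --context no-preload-566777 -n kube-system describe pod metrics-server-78fcd8795b-dv8pr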

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-135920 -n embed-certs-135920
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 13:52:14.355392568 +0000 UTC m=+6612.618747376
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-135920 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-135920 logs -n 25: (2.366928287s)
E0729 13:52:17.402820  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo cat                              | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:34:10
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:10.969348  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969356  301425 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:10.969360  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969506  301425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:34:10.970007  301425 out.go:298] Setting JSON to false
	I0729 13:34:10.970908  301425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11794,"bootTime":1722248257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:34:10.970971  301425 start.go:139] virtualization: kvm guest
	I0729 13:34:10.973245  301425 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:34:10.974804  301425 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:34:10.974803  301425 notify.go:220] Checking for updates...
	I0729 13:34:10.977011  301425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:34:10.978270  301425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:34:10.979473  301425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:34:10.980743  301425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:34:10.981923  301425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:34:10.983514  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:34:10.983962  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:10.984049  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:10.998985  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 13:34:10.999407  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:10.999928  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:10.999951  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.000306  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.000497  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.002455  301425 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:34:11.003702  301425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:34:11.003997  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:11.004037  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:11.018707  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0729 13:34:11.019177  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:11.019653  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:11.019676  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.019968  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.020126  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.055819  301425 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:34:11.057085  301425 start.go:297] selected driver: kvm2
	I0729 13:34:11.057104  301425 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.057242  301425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:34:11.057967  301425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.058029  301425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:34:11.073706  301425 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:34:11.074089  301425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:34:11.074169  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:34:11.074188  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:34:11.074240  301425 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.074366  301425 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.076296  301425 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:34:09.149068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:11.077828  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:34:11.077869  301425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:34:11.077879  301425 cache.go:56] Caching tarball of preloaded images
	I0729 13:34:11.077959  301425 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:34:11.077970  301425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:34:11.078069  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:34:11.078241  301425 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:34:15.229067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:18.301058  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:24.381104  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:27.453064  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:33.533067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:36.605120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:42.685075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:45.757111  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:51.837033  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:54.909068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:00.989073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:04.061125  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:10.141082  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:13.213123  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:19.293109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:22.365061  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:28.445075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:31.517094  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:37.597080  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:40.669073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:46.749070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:49.821083  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:55.901013  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:58.973149  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:05.053098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:08.125109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:14.205093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:17.277093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:23.357105  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:26.429122  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:32.509070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:35.581107  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:41.661120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:44.733129  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:50.813085  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:53.885117  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:59.965073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:03.037079  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:09.117098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:12.189049  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:15.193505  300746 start.go:364] duration metric: took 4m36.683808785s to acquireMachinesLock for "no-preload-566777"
	I0729 13:37:15.193569  300746 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:15.193577  300746 fix.go:54] fixHost starting: 
	I0729 13:37:15.193937  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:15.193976  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:15.209623  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0729 13:37:15.210158  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:15.210625  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:37:15.210646  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:15.211001  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:15.211265  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:15.211468  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:37:15.213144  300746 fix.go:112] recreateIfNeeded on no-preload-566777: state=Stopped err=<nil>
	I0729 13:37:15.213185  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	W0729 13:37:15.213349  300746 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:15.215474  300746 out.go:177] * Restarting existing kvm2 VM for "no-preload-566777" ...
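
The block above is the standard libmachine "fix" path: the machines lock is acquired, the existing profile is found, and because the kvm2 driver reports the VM as Stopped, minikube restarts it rather than re-creating it. A minimal Go sketch of that decision (illustrative only, not minikube's actual fix.go):

    // Illustrative sketch only (not minikube's actual fix.go): the decision
    // the log shows above -- keep the existing machine configuration and
    // restart the VM when the driver reports it as Stopped.
    package main

    import "fmt"

    type state int

    const (
        running state = iota
        stopped
    )

    // driver stands in for the kvm2 libmachine driver plugin.
    type driver struct{ st state }

    func (d *driver) GetState() (state, error) { return d.st, nil }
    func (d *driver) Start() error             { d.st = running; return nil }

    // fixHost mirrors "Skipping create...Using existing machine configuration":
    // never re-create, just start a stopped VM.
    func fixHost(name string, d *driver) error {
        st, err := d.GetState()
        if err != nil {
            return err
        }
        if st == stopped {
            fmt.Printf("* Restarting existing kvm2 VM for %q ...\n", name)
            return d.Start()
        }
        return nil
    }

    func main() {
        _ = fixHost("no-preload-566777", &driver{st: stopped})
    }
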
	I0729 13:37:15.190804  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:15.190850  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191224  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:37:15.191257  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191494  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:37:15.193354  300705 machine.go:97] duration metric: took 4m37.425774293s to provisionDockerMachine
	I0729 13:37:15.193407  300705 fix.go:56] duration metric: took 4m37.447841932s for fixHost
	I0729 13:37:15.193419  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 4m37.447869212s
	W0729 13:37:15.193447  300705 start.go:714] error starting host: provision: host is not running
	W0729 13:37:15.193569  300705 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 13:37:15.193581  300705 start.go:729] Will try again in 5 seconds ...
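
For embed-certs-135920 the opposite happened: after roughly 4m37s of failed SSH dials the provisioner gave up with "provision: host is not running", so start.go logs a warning and schedules one more full attempt after a short pause. A sketch of that outer retry; the 5-second interval comes from the log, the single-retry count is an assumption:

    // Hedged sketch of the outer retry seen above, not minikube's start.go.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost(profile string) error {
        // Placeholder for the provisioning path that failed above.
        return errors.New("provision: host is not running")
    }

    func startWithRetry(profile string) error {
        if err := startHost(profile); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second)
            return startHost(profile)
        }
        return nil
    }

    func main() {
        if err := startWithRetry("embed-certs-135920"); err != nil {
            fmt.Println("start failed:", err)
        }
    }
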
	I0729 13:37:15.216957  300746 main.go:141] libmachine: (no-preload-566777) Calling .Start
	I0729 13:37:15.217120  300746 main.go:141] libmachine: (no-preload-566777) Ensuring networks are active...
	I0729 13:37:15.217761  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network default is active
	I0729 13:37:15.218067  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network mk-no-preload-566777 is active
	I0729 13:37:15.218451  300746 main.go:141] libmachine: (no-preload-566777) Getting domain xml...
	I0729 13:37:15.219134  300746 main.go:141] libmachine: (no-preload-566777) Creating domain...
	I0729 13:37:16.412301  300746 main.go:141] libmachine: (no-preload-566777) Waiting to get IP...
	I0729 13:37:16.413162  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.413576  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.413670  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.413557  302040 retry.go:31] will retry after 233.512145ms: waiting for machine to come up
	I0729 13:37:16.649335  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.649921  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.649945  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.649885  302040 retry.go:31] will retry after 328.846738ms: waiting for machine to come up
	I0729 13:37:16.980566  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.980976  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.981022  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.980926  302040 retry.go:31] will retry after 329.69915ms: waiting for machine to come up
	I0729 13:37:17.312547  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.312948  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.312977  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.312906  302040 retry.go:31] will retry after 418.810733ms: waiting for machine to come up
	I0729 13:37:17.733615  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.734042  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.734065  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.734009  302040 retry.go:31] will retry after 694.191211ms: waiting for machine to come up
	I0729 13:37:20.196079  300705 start.go:360] acquireMachinesLock for embed-certs-135920: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:18.429670  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:18.430024  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:18.430055  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:18.429973  302040 retry.go:31] will retry after 857.66396ms: waiting for machine to come up
	I0729 13:37:19.289078  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:19.289491  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:19.289521  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:19.289458  302040 retry.go:31] will retry after 994.340261ms: waiting for machine to come up
	I0729 13:37:20.285875  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:20.286308  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:20.286340  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:20.286263  302040 retry.go:31] will retry after 1.052380852s: waiting for machine to come up
	I0729 13:37:21.340435  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:21.340775  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:21.340821  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:21.340743  302040 retry.go:31] will retry after 1.429700498s: waiting for machine to come up
	I0729 13:37:22.772362  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:22.772754  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:22.772782  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:22.772700  302040 retry.go:31] will retry after 1.702185495s: waiting for machine to come up
	I0729 13:37:24.477636  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:24.478074  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:24.478106  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:24.478003  302040 retry.go:31] will retry after 2.649912402s: waiting for machine to come up
	I0729 13:37:27.129797  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:27.130212  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:27.130243  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:27.130159  302040 retry.go:31] will retry after 3.079887428s: waiting for machine to come up
	I0729 13:37:30.213431  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:30.213918  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:30.213958  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:30.213875  302040 retry.go:31] will retry after 3.08003223s: waiting for machine to come up
	I0729 13:37:33.297139  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.297604  300746 main.go:141] libmachine: (no-preload-566777) Found IP for machine: 192.168.61.84
	I0729 13:37:33.297627  300746 main.go:141] libmachine: (no-preload-566777) Reserving static IP address...
	I0729 13:37:33.297639  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has current primary IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.298106  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.298146  300746 main.go:141] libmachine: (no-preload-566777) Reserved static IP address: 192.168.61.84
	I0729 13:37:33.298164  300746 main.go:141] libmachine: (no-preload-566777) DBG | skip adding static IP to network mk-no-preload-566777 - found existing host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"}
	I0729 13:37:33.298178  300746 main.go:141] libmachine: (no-preload-566777) DBG | Getting to WaitForSSH function...
	I0729 13:37:33.298194  300746 main.go:141] libmachine: (no-preload-566777) Waiting for SSH to be available...
	I0729 13:37:33.300310  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300618  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.300653  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300731  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH client type: external
	I0729 13:37:33.300773  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa (-rw-------)
	I0729 13:37:33.300826  300746 main.go:141] libmachine: (no-preload-566777) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:33.300957  300746 main.go:141] libmachine: (no-preload-566777) DBG | About to run SSH command:
	I0729 13:37:33.300985  300746 main.go:141] libmachine: (no-preload-566777) DBG | exit 0
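
The "will retry after ..." lines above are the usual wait loop: poll libvirt for a DHCP lease with a growing, jittered delay until the domain has an IP, then confirm reachability by running a no-op "exit 0" over SSH. A sketch of that pattern (assumed shape, not minikube's retry.go):

    // Minimal wait-with-backoff sketch; delays roughly match the
    // "will retry after ..." progression in the log above.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func waitFor(what string, probe func() bool) error {
        delay := 200 * time.Millisecond
        for attempt := 0; attempt < 20; attempt++ {
            if probe() {
                return nil
            }
            // Jittered, roughly geometric backoff, capped near 3s.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for %s\n", sleep, what)
            time.Sleep(sleep)
            if delay *= 2; delay > 3*time.Second {
                delay = 3 * time.Second
            }
        }
        return fmt.Errorf("timed out waiting for %s", what)
    }

    func main() {
        ip := "192.168.61.84" // address from the DHCP lease above
        _ = waitFor("machine to come up", func() bool {
            return true // placeholder for the libvirt DHCP-lease lookup
        })
        _ = waitFor("SSH", func() bool {
            return exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "ConnectTimeout=10",
                "docker@"+ip, "exit 0").Run() == nil
        })
    }
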
	I0729 13:37:34.861481  301044 start.go:364] duration metric: took 4m23.064160625s to acquireMachinesLock for "default-k8s-diff-port-972693"
	I0729 13:37:34.861564  301044 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:34.861576  301044 fix.go:54] fixHost starting: 
	I0729 13:37:34.862021  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:34.862055  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:34.879106  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0729 13:37:34.879506  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:34.880050  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:37:34.880077  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:34.880423  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:34.880637  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:34.880838  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:37:34.882251  301044 fix.go:112] recreateIfNeeded on default-k8s-diff-port-972693: state=Stopped err=<nil>
	I0729 13:37:34.882284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	W0729 13:37:34.882465  301044 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:34.884611  301044 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-972693" ...
	I0729 13:37:33.420745  300746 main.go:141] libmachine: (no-preload-566777) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:33.421178  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetConfigRaw
	I0729 13:37:33.421861  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.424343  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.424680  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.424710  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.425061  300746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/config.json ...
	I0729 13:37:33.425244  300746 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:33.425262  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:33.425513  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.427708  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.427961  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.427989  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.428171  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.428354  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428528  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428672  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.428933  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.429139  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.429150  300746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:33.525027  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:33.525065  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525306  300746 buildroot.go:166] provisioning hostname "no-preload-566777"
	I0729 13:37:33.525340  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525551  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.528124  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528491  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.528529  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528677  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.528865  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529025  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529144  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.529286  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.529453  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.529465  300746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-566777 && echo "no-preload-566777" | sudo tee /etc/hostname
	I0729 13:37:33.638867  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-566777
	
	I0729 13:37:33.638902  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.641406  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641730  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.641762  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641908  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.642112  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642285  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642414  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.642555  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.642727  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.642743  300746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-566777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-566777/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-566777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:33.749760  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:33.749789  300746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:33.749812  300746 buildroot.go:174] setting up certificates
	I0729 13:37:33.749821  300746 provision.go:84] configureAuth start
	I0729 13:37:33.749831  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.750114  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.752924  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753241  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.753264  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753477  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.755385  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755681  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.755701  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755840  300746 provision.go:143] copyHostCerts
	I0729 13:37:33.755904  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:33.755926  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:33.756019  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:33.756156  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:33.756169  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:33.756206  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:33.756276  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:33.756286  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:33.756317  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:33.756380  300746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.no-preload-566777 san=[127.0.0.1 192.168.61.84 localhost minikube no-preload-566777]
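
The provisioner then regenerates the machine's server certificate, signing it with the shared minikube CA and embedding the SAN list shown above (127.0.0.1, the VM IP, localhost, minikube, and the machine name). A hedged crypto/x509 sketch; the file names, the PKCS#1 RSA key format, and the validity period are assumptions, not the exact values used by the test:

    // Hedged sketch of the server-cert generation step above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, err := os.ReadFile("ca.pem") // assumed path
        if err != nil {
            panic(err)
        }
        caKeyPEM, err := os.ReadFile("ca-key.pem") // assumed path
        if err != nil {
            panic(err)
        }
        caBlock, _ := pem.Decode(caPEM)
        ca, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-566777"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: loopback and the VM's current IP plus DNS names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.84")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-566777"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
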
	I0729 13:37:34.226953  300746 provision.go:177] copyRemoteCerts
	I0729 13:37:34.227033  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:34.227066  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.229542  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229816  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.229853  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.230177  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.230314  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.230452  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.310803  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:37:34.334545  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:37:34.357908  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:34.381163  300746 provision.go:87] duration metric: took 631.325967ms to configureAuth
	I0729 13:37:34.381200  300746 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:34.381441  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:37:34.381535  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.383985  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384286  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.384312  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384473  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.384681  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384862  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384995  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.385176  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.385393  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.385414  300746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:34.640587  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:34.640615  300746 machine.go:97] duration metric: took 1.215357318s to provisionDockerMachine
	I0729 13:37:34.640628  300746 start.go:293] postStartSetup for "no-preload-566777" (driver="kvm2")
	I0729 13:37:34.640645  300746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:34.640683  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.641067  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:34.641104  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.643711  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644066  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.644097  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644215  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.644398  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.644555  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.644677  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.723215  300746 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:34.727393  300746 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:34.727425  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:34.727507  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:34.727614  300746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:34.727770  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:34.736666  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:34.759678  300746 start.go:296] duration metric: took 119.034973ms for postStartSetup
	I0729 13:37:34.759716  300746 fix.go:56] duration metric: took 19.566140877s for fixHost
	I0729 13:37:34.759748  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.762103  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762468  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.762491  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762645  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.762843  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763008  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763111  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.763229  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.763392  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.763403  300746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:34.861306  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260254.835831305
	
	I0729 13:37:34.861333  300746 fix.go:216] guest clock: 1722260254.835831305
	I0729 13:37:34.861341  300746 fix.go:229] Guest: 2024-07-29 13:37:34.835831305 +0000 UTC Remote: 2024-07-29 13:37:34.759720831 +0000 UTC m=+296.387252495 (delta=76.110474ms)
	I0729 13:37:34.861376  300746 fix.go:200] guest clock delta is within tolerance: 76.110474ms
	I0729 13:37:34.861384  300746 start.go:83] releasing machines lock for "no-preload-566777", held for 19.66783585s
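
The guest clock is read with date +%s.%N and compared against the host; here the delta is about 76ms, so no adjustment is needed. A small sketch of that comparison using the values from the log (the tolerance below is an assumed stand-in for minikube's real threshold):

    // Sketch of the clock-skew check logged above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
        // Expects `date +%s.%N` output: seconds and 9-digit nanoseconds.
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        d := time.Unix(sec, nsec).Sub(host)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // Guest and host timestamps taken from the log lines above.
        delta, _ := guestClockDelta("1722260254.835831305", time.Unix(1722260254, 759720831))
        const tolerance = time.Second // assumed threshold for illustration
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }
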
	I0729 13:37:34.861413  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.861708  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:34.864181  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864534  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.864567  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864757  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865296  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865467  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865546  300746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:34.865600  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.865726  300746 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:34.865753  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.868333  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868522  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868772  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868810  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868839  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868859  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868913  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869060  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869152  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869209  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869300  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869349  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869417  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.869551  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.970978  300746 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:34.978226  300746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:35.128653  300746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:35.134619  300746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:35.134688  300746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:35.150674  300746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:35.150697  300746 start.go:495] detecting cgroup driver to use...
	I0729 13:37:35.150762  300746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:35.166545  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:35.178859  300746 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:35.178913  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:35.197133  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:35.214430  300746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:35.337707  300746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:35.467057  300746 docker.go:233] disabling docker service ...
	I0729 13:37:35.467134  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:35.480960  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:35.493850  300746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:35.629455  300746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:35.741534  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:35.754886  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:35.773243  300746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:37:35.773323  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.783589  300746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:35.783673  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.794150  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.805389  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.816636  300746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:35.828027  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.838467  300746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.856470  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
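
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. Collected below in one illustrative driver; the test runs the same commands remotely through ssh_runner, here they are simply handed to a local shell:

    // Illustrative driver for the crio.conf edits shown above.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        cmds := []string{
            // Pin the sandbox image expected by Kubernetes v1.31.0-beta.0.
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf,
            // Match kubelet's cgroup driver on the buildroot guest.
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
            `sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
            // Let pods bind low ports without extra privileges.
            `sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' ` + conf,
            `sudo grep -q '^ *default_sysctls' ` + conf + ` || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' ` + conf,
            `sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' ` + conf,
        }
        for _, c := range cmds {
            if err := exec.Command("sh", "-c", c).Run(); err != nil {
                log.Fatalf("%s: %v", c, err)
            }
        }
    }
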
	I0729 13:37:35.866773  300746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:35.876110  300746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:35.876175  300746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:35.889768  300746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:35.909971  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:36.046023  300746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:36.192169  300746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:36.192238  300746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:36.197281  300746 start.go:563] Will wait 60s for crictl version
	I0729 13:37:36.197365  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.201359  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:36.248317  300746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:36.248420  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.276247  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.306549  300746 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 13:37:34.885944  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Start
	I0729 13:37:34.886114  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring networks are active...
	I0729 13:37:34.886856  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network default is active
	I0729 13:37:34.887211  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network mk-default-k8s-diff-port-972693 is active
	I0729 13:37:34.887684  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Getting domain xml...
	I0729 13:37:34.888427  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Creating domain...
	I0729 13:37:36.147265  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting to get IP...
	I0729 13:37:36.148095  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148547  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148616  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.148516  302181 retry.go:31] will retry after 191.117257ms: waiting for machine to come up
	I0729 13:37:36.340984  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341507  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.341444  302181 retry.go:31] will retry after 285.557329ms: waiting for machine to come up
	I0729 13:37:36.629066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629670  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.629621  302181 retry.go:31] will retry after 397.294163ms: waiting for machine to come up
	I0729 13:37:36.307930  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:36.311057  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311389  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:36.311417  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311699  300746 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:36.316257  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:36.330109  300746 kubeadm.go:883] updating cluster {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:36.330268  300746 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:37:36.330320  300746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:36.367218  300746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 13:37:36.367250  300746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:37:36.367327  300746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.367333  300746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.367394  300746 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 13:37:36.367404  300746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.367432  300746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.367353  300746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.367412  300746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.367743  300746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.369020  300746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.369125  300746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.369150  300746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.369203  300746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.369015  300746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.369484  300746 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 13:37:36.369609  300746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.369763  300746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.560256  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.600945  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.604476  300746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 13:37:36.604539  300746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.604592  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.606566  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 13:37:36.649109  300746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 13:37:36.649210  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.649212  300746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.649328  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.696863  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.698623  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.713816  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.727059  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.764110  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.764204  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.764208  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.784479  300746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 13:37:36.784542  300746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.784558  300746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 13:37:36.784597  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.784598  300746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.784694  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.813445  300746 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 13:37:36.813491  300746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.813544  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.825275  300746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 13:37:36.825290  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 13:37:36.825392  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825463  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825327  300746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.825515  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.852786  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.852866  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:36.852822  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.852843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.852984  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:37.587824  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:37.028009  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028349  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028378  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.028295  302181 retry.go:31] will retry after 507.597159ms: waiting for machine to come up
	I0729 13:37:37.538138  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538550  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538581  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.538507  302181 retry.go:31] will retry after 508.855087ms: waiting for machine to come up
	I0729 13:37:38.049628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050241  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050277  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.050198  302181 retry.go:31] will retry after 889.089993ms: waiting for machine to come up
	I0729 13:37:38.940541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941096  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.941009  302181 retry.go:31] will retry after 891.889885ms: waiting for machine to come up
	I0729 13:37:39.834956  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835395  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:39.835341  302181 retry.go:31] will retry after 1.030799215s: waiting for machine to come up
	I0729 13:37:40.867814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868336  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868367  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:40.868283  302181 retry.go:31] will retry after 1.40369357s: waiting for machine to come up
	I0729 13:37:38.870850  300746 ssh_runner.go:235] Completed: which crictl: (2.045307778s)
	I0729 13:37:38.870925  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:38.870921  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.045429354s)
	I0729 13:37:38.870946  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 13:37:38.871001  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0: (2.018116939s)
	I0729 13:37:38.871024  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.01808875s)
	I0729 13:37:38.871054  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871083  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.018080011s)
	I0729 13:37:38.871109  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 13:37:38.871120  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871056  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 13:37:38.871166  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871151  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871234  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0: (2.018278547s)
	I0729 13:37:38.871247  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:38.871259  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 13:37:38.871304  300746 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.283446632s)
	I0729 13:37:38.871343  300746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 13:37:38.871372  300746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:38.871406  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:38.871310  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:38.939395  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:38.939419  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 13:37:38.939532  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:40.939632  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.068434649s)
	I0729 13:37:40.939669  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 13:37:40.939693  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939702  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.068259157s)
	I0729 13:37:40.939734  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 13:37:40.939761  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939794  300746 ssh_runner.go:235] Completed: which crictl: (2.068372626s)
	I0729 13:37:40.939827  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.068564103s)
	I0729 13:37:40.939843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:40.939844  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.000295325s)
	I0729 13:37:40.939847  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 13:37:40.939856  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 13:37:40.999406  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 13:37:40.999505  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:43.015187  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.075399061s)
	I0729 13:37:43.015226  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 13:37:43.015243  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.015694914s)
	I0729 13:37:43.015259  300746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:43.015279  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 13:37:43.015313  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:42.273822  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274326  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:42.274251  302181 retry.go:31] will retry after 2.255017939s: waiting for machine to come up
	I0729 13:37:44.531432  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531845  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531873  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:44.531801  302181 retry.go:31] will retry after 2.272405743s: waiting for machine to come up
	I0729 13:37:46.401061  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.385713069s)
	I0729 13:37:46.401109  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 13:37:46.401147  300746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:46.401207  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:48.358628  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.9573934s)
	I0729 13:37:48.358659  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 13:37:48.358682  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:48.358733  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:46.806043  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806654  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806681  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:46.806599  302181 retry.go:31] will retry after 2.212726673s: waiting for machine to come up
	I0729 13:37:49.022244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022732  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022770  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:49.022677  302181 retry.go:31] will retry after 3.071460325s: waiting for machine to come up
	I0729 13:37:50.216727  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.857925776s)
	I0729 13:37:50.216769  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 13:37:50.216822  300746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.216879  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.862685  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 13:37:50.862738  300746 cache_images.go:123] Successfully loaded all cached images
	I0729 13:37:50.862746  300746 cache_images.go:92] duration metric: took 14.49548231s to LoadCachedImages
	I0729 13:37:50.862763  300746 kubeadm.go:934] updating node { 192.168.61.84 8443 v1.31.0-beta.0 crio true true} ...
	I0729 13:37:50.862924  300746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-566777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:50.863021  300746 ssh_runner.go:195] Run: crio config
	I0729 13:37:50.911526  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:50.911551  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:50.911563  300746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:50.911593  300746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.84 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-566777 NodeName:no-preload-566777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:50.911782  300746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-566777"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:50.911856  300746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 13:37:50.922091  300746 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:50.922162  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:50.931275  300746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 13:37:50.947494  300746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 13:37:50.963108  300746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 13:37:50.979666  300746 ssh_runner.go:195] Run: grep 192.168.61.84	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:50.983215  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:50.994627  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:51.117275  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:51.134412  300746 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777 for IP: 192.168.61.84
	I0729 13:37:51.134439  300746 certs.go:194] generating shared ca certs ...
	I0729 13:37:51.134461  300746 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:51.134641  300746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:51.134692  300746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:51.134703  300746 certs.go:256] generating profile certs ...
	I0729 13:37:51.134825  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/client.key
	I0729 13:37:51.134901  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key.445c667e
	I0729 13:37:51.134962  300746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key
	I0729 13:37:51.135114  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:51.135153  300746 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:51.135166  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:51.135196  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:51.135225  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:51.135256  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:51.135309  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:51.136036  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:51.169507  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:51.201916  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:51.227860  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:51.263617  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:37:51.288105  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:37:51.314837  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:51.343892  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:37:51.367328  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:51.389470  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:51.411446  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:51.433270  300746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:51.448939  300746 ssh_runner.go:195] Run: openssl version
	I0729 13:37:51.454475  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:51.465080  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469541  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469605  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.475366  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:51.485979  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:51.496382  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500511  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500571  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.505997  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:51.516733  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:51.527637  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531754  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531797  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.537237  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:51.548006  300746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:51.552581  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:51.558414  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:51.563879  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:51.569869  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:51.575800  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:37:51.581525  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:37:51.587642  300746 kubeadm.go:392] StartCluster: {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:51.587777  300746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:37:51.587828  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.627118  300746 cri.go:89] found id: ""
	I0729 13:37:51.627212  300746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:37:51.637686  300746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:37:51.637711  300746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:37:51.637765  300746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:37:51.647368  300746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:37:51.648291  300746 kubeconfig.go:125] found "no-preload-566777" server: "https://192.168.61.84:8443"
	I0729 13:37:51.650296  300746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:37:51.659616  300746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.84
	I0729 13:37:51.659649  300746 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:37:51.659663  300746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:37:51.659714  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.700636  300746 cri.go:89] found id: ""
	I0729 13:37:51.700703  300746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:37:51.718225  300746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:37:51.728237  300746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:37:51.728257  300746 kubeadm.go:157] found existing configuration files:
	
	I0729 13:37:51.728303  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:37:51.738280  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:37:51.738364  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:37:51.748770  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:37:51.758572  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:37:51.758649  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:37:51.769634  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.779757  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:37:51.779827  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.790745  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:37:51.801212  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:37:51.801275  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:37:51.811706  300746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:37:51.821251  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:51.933905  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.401823  301425 start.go:364] duration metric: took 3m42.323534375s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:37:53.401902  301425 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:53.401914  301425 fix.go:54] fixHost starting: 
	I0729 13:37:53.402310  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:53.402344  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:53.421973  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 13:37:53.422456  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:53.423079  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:37:53.423112  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:53.423508  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:53.423734  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:37:53.423883  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:37:53.425687  301425 fix.go:112] recreateIfNeeded on old-k8s-version-924039: state=Stopped err=<nil>
	I0729 13:37:53.425733  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	W0729 13:37:53.425902  301425 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:53.427931  301425 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	I0729 13:37:52.097443  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.097870  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Found IP for machine: 192.168.50.34
	I0729 13:37:52.097904  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserving static IP address...
	I0729 13:37:52.097923  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has current primary IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.098329  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.098357  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserved static IP address: 192.168.50.34
	I0729 13:37:52.098377  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | skip adding static IP to network mk-default-k8s-diff-port-972693 - found existing host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"}
	I0729 13:37:52.098406  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for SSH to be available...
	I0729 13:37:52.098423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Getting to WaitForSSH function...
	I0729 13:37:52.100530  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.100878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.100908  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.101029  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH client type: external
	I0729 13:37:52.101062  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa (-rw-------)
	I0729 13:37:52.101106  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:52.101121  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | About to run SSH command:
	I0729 13:37:52.101145  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | exit 0
	I0729 13:37:52.225041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:52.225381  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetConfigRaw
	I0729 13:37:52.226001  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.228722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229109  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.229140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229315  301044 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/config.json ...
	I0729 13:37:52.229522  301044 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:52.229541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:52.229716  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.231823  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.232181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.232446  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232613  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232758  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.232913  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.233100  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.233111  301044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:52.336948  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:52.336978  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337288  301044 buildroot.go:166] provisioning hostname "default-k8s-diff-port-972693"
	I0729 13:37:52.337321  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337552  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.340284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340598  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.340623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340724  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.340913  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341090  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341261  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.341419  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.341591  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.341603  301044 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-972693 && echo "default-k8s-diff-port-972693" | sudo tee /etc/hostname
	I0729 13:37:52.455264  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-972693
	
	I0729 13:37:52.455294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.457937  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458304  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.458332  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458465  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.458667  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458857  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458995  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.459170  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.459352  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.459376  301044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-972693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-972693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-972693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:52.570543  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:52.570578  301044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:52.570603  301044 buildroot.go:174] setting up certificates
	I0729 13:37:52.570617  301044 provision.go:84] configureAuth start
	I0729 13:37:52.570628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.570900  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.573309  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573609  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.573641  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573751  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.575826  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.576177  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576344  301044 provision.go:143] copyHostCerts
	I0729 13:37:52.576414  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:52.576483  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:52.576568  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:52.576698  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:52.576707  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:52.576728  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:52.576786  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:52.576815  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:52.576845  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:52.576902  301044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-972693 san=[127.0.0.1 192.168.50.34 default-k8s-diff-port-972693 localhost minikube]
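(Editor's note, illustration only.) The provisioner regenerates the machine's server certificate with a SAN list covering 127.0.0.1, the VM IP, the profile name, localhost and minikube. As a rough sketch of how such a SAN-bearing certificate can be produced with Go's standard library — this is a generic, self-signed example, not minikube's libmachine code, and the SAN values are simply copied from the log line above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirroring the log: IP SANs plus DNS SANs.
	ipSANs := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.34")}
	dnsSANs := []string{"default-k8s-diff-port-972693", "localhost", "minikube"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-972693"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ipSANs,
		DNSNames:     dnsSANs,
	}

	// Self-signed for brevity; the real provisioner signs the server cert with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	certOut, _ := os.Create("server.pem")
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyOut, _ := os.Create("server-key.pem")
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}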
	I0729 13:37:52.764928  301044 provision.go:177] copyRemoteCerts
	I0729 13:37:52.764988  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:52.765018  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.767540  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.767842  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.767872  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.768041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.768213  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.768362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.768474  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:52.847615  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:52.877666  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 13:37:52.901219  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:52.924922  301044 provision.go:87] duration metric: took 354.279838ms to configureAuth
	I0729 13:37:52.924953  301044 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:52.925157  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:52.925244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.927791  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.928181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.928533  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928830  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.928978  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.929208  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.929230  301044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:53.176359  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:53.176391  301044 machine.go:97] duration metric: took 946.853063ms to provisionDockerMachine
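(Editor's note, illustration only.) Each provisioning step above is a single command executed over SSH against the VM with libmachine's native client (the "Using SSH client type: native" lines). A minimal sketch of that run-command-over-SSH pattern using golang.org/x/crypto/ssh; the address, user and key path are taken from the log, the host-key handling is deliberately lax and only acceptable for throwaway test VMs, and this is not minikube's actual ssh_runner code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command on the VM and returns its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.50.34:22", "docker",
		"/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa",
		"sudo systemctl restart crio")
	fmt.Println(out, err)
}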
	I0729 13:37:53.176404  301044 start.go:293] postStartSetup for "default-k8s-diff-port-972693" (driver="kvm2")
	I0729 13:37:53.176419  301044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:53.176441  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.176782  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:53.176818  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.179340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.179698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179858  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.180053  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.180214  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.180336  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.259826  301044 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:53.264059  301044 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:53.264087  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:53.264155  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:53.264239  301044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:53.264345  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:53.273954  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:53.297340  301044 start.go:296] duration metric: took 120.913486ms for postStartSetup
	I0729 13:37:53.297392  301044 fix.go:56] duration metric: took 18.435815853s for fixHost
	I0729 13:37:53.297421  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.299859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300187  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.300218  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.300576  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300755  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300932  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.301116  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:53.301314  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:53.301324  301044 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:37:53.401628  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260273.369344581
	
	I0729 13:37:53.401671  301044 fix.go:216] guest clock: 1722260273.369344581
	I0729 13:37:53.401682  301044 fix.go:229] Guest: 2024-07-29 13:37:53.369344581 +0000 UTC Remote: 2024-07-29 13:37:53.297397345 +0000 UTC m=+281.644280810 (delta=71.947236ms)
	I0729 13:37:53.401705  301044 fix.go:200] guest clock delta is within tolerance: 71.947236ms
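(Editor's note, illustration only.) The fix step reads the guest clock over SSH with `date +%s.%N`, compares it to the host clock and only resynchronizes when the delta exceeds a tolerance. A small sketch of that comparison; the 2-second tolerance is an assumption for illustration, since the log does not show minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the output of `date +%s.%N` into a time.Time.
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestClock("1722260273.369344581") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)

	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}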
	I0729 13:37:53.401711  301044 start.go:83] releasing machines lock for "default-k8s-diff-port-972693", held for 18.540175489s
	I0729 13:37:53.401760  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.402061  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:53.404813  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405182  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.405207  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405359  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.405844  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406153  301044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:53.406210  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.406289  301044 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:53.406315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.409060  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409351  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409460  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.409814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.409878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409909  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409992  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410092  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.410183  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.410315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.410435  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410631  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.510289  301044 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:53.517635  301044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:53.660575  301044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:53.668128  301044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:53.668207  301044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:53.690732  301044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:53.690764  301044 start.go:495] detecting cgroup driver to use...
	I0729 13:37:53.690838  301044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:53.707461  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:53.721922  301044 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:53.722004  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:53.740941  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:53.759323  301044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:53.900344  301044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:54.065647  301044 docker.go:233] disabling docker service ...
	I0729 13:37:54.065780  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:54.082468  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:54.098283  301044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:54.213104  301044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:54.339560  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:54.360412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:54.384836  301044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:37:54.384900  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.400889  301044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:54.400980  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.416941  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.433090  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.449306  301044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:54.461742  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.477135  301044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.501431  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.519646  301044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:54.532995  301044 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:54.533074  301044 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:54.550639  301044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:54.561896  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:54.710789  301044 ssh_runner.go:195] Run: sudo systemctl restart crio
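(Editor's note, illustration only.) The CRI-O tweaks above are in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, pin conmon_cgroup to "pod", and open unprivileged port 0 via default_sysctls, followed by a daemon-reload and crio restart. A rough Go equivalent of the first two edits, operating on the file contents with regular expressions; the real code shells out to sed on the guest, so this is a sketch of the transformation only:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the two sed commands from the log:
//   s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|
//   s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	// Hypothetical drop-in content before the rewrite.
	in := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Print(rewriteCrioConf(in))
}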
	I0729 13:37:54.885480  301044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:54.885558  301044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:54.890556  301044 start.go:563] Will wait 60s for crictl version
	I0729 13:37:54.890629  301044 ssh_runner.go:195] Run: which crictl
	I0729 13:37:54.894644  301044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:54.941141  301044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:54.941236  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:54.983380  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:55.027770  301044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
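(Editor's note, illustration only.) After restarting CRI-O, start-up waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s more for crictl to report a version. A minimal polling sketch for the socket wait; the path and the 60s budget come from the log, while the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond)
	fmt.Println(err)
}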
	I0729 13:37:53.429298  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .Start
	I0729 13:37:53.429471  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:37:53.430263  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:37:53.430649  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:37:53.431011  301425 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:37:53.431825  301425 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:37:54.749878  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:37:54.751148  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.751716  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.751784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.751696  302377 retry.go:31] will retry after 230.330776ms: waiting for machine to come up
	I0729 13:37:54.984551  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.985138  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.985183  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.985094  302377 retry.go:31] will retry after 291.000555ms: waiting for machine to come up
	I0729 13:37:55.277730  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.278199  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.278220  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.278152  302377 retry.go:31] will retry after 360.474919ms: waiting for machine to come up
	I0729 13:37:55.640675  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.641255  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.641288  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.641207  302377 retry.go:31] will retry after 480.424143ms: waiting for machine to come up
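(Editor's note, illustration only.) In parallel, a second process (301425) is bringing the old-k8s-version-924039 domain back up and polling libvirt for a DHCP lease, retrying with a growing, jittered delay (230ms, 291ms, 360ms, 480ms, ...). A generic sketch of such a retry loop; lookupIP is a hypothetical stand-in for the real libvirt lease query, and the backoff constants are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a placeholder for querying libvirt's DHCP leases by MAC address.
func lookupIP(mac string) (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

// waitForIP retries lookupIP with a jittered, roughly exponential backoff.
func waitForIP(mac string, attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine %s never reported an IP", mac)
}

func main() {
	_, err := waitForIP("52:54:00:30:f2:7d", 5)
	fmt.Println(err)
}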
	I0729 13:37:55.029239  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:55.032722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033225  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:55.033257  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033668  301044 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:55.038429  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:55.056198  301044 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:55.056373  301044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:55.056440  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:55.100534  301044 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:37:55.100612  301044 ssh_runner.go:195] Run: which lz4
	I0729 13:37:55.105708  301044 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:37:55.110384  301044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:37:55.110417  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:37:56.630726  301044 crio.go:462] duration metric: took 1.525047583s to copy over tarball
	I0729 13:37:56.630816  301044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:37:53.446825  300746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.51288234s)
	I0729 13:37:53.446866  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.663105  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.740482  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.823641  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:37:53.823753  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.324001  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.824299  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.933931  300746 api_server.go:72] duration metric: took 1.11028623s to wait for apiserver process to appear ...
	I0729 13:37:54.933969  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:37:54.933996  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:54.934563  300746 api_server.go:269] stopped: https://192.168.61.84:8443/healthz: Get "https://192.168.61.84:8443/healthz": dial tcp 192.168.61.84:8443: connect: connection refused
	I0729 13:37:55.434598  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.005676  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.005719  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.005737  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.066371  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.066408  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.434268  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.439205  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.439240  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:58.934796  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.944368  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.944399  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.434576  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.443061  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:59.443098  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.934805  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.943892  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:37:59.955156  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:37:59.955185  300746 api_server.go:131] duration metric: took 5.021207326s to wait for apiserver health ...
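(Editor's note, illustration only.) On the no-preload node (process 300746), the kubeadm phases finish and start-up then polls https://192.168.61.84:8443/healthz roughly every 500ms, treating connection refused, 403 (anonymous access before RBAC bootstrap) and 500 (post-start hooks still failing) as "not ready yet" and stopping at the first 200. A compact sketch of that loop; the client skips TLS verification purely for illustration, whereas minikube verifies against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: minikube uses the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap and 500 while post-start hooks run are expected here.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, firstLine(string(body)))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

// firstLine trims a multi-line healthz body down to its first line for logging.
func firstLine(s string) string {
	for i, r := range s {
		if r == '\n' {
			return s[:i]
		}
	}
	return s
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.84:8443/healthz", 2*time.Minute))
}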
	I0729 13:37:59.955197  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.955205  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:00.307264  300746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:37:56.123854  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.124460  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.124487  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.124433  302377 retry.go:31] will retry after 529.614291ms: waiting for machine to come up
	I0729 13:37:56.656136  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.656626  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.656657  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.656599  302377 retry.go:31] will retry after 794.429248ms: waiting for machine to come up
	I0729 13:37:57.452523  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:57.453001  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:57.453033  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:57.452952  302377 retry.go:31] will retry after 1.140583184s: waiting for machine to come up
	I0729 13:37:58.594636  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:58.595067  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:58.595109  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:58.595024  302377 retry.go:31] will retry after 894.563974ms: waiting for machine to come up
	I0729 13:37:59.491447  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:59.492094  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:59.492120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:59.491993  302377 retry.go:31] will retry after 1.145531829s: waiting for machine to come up
	I0729 13:38:00.639387  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:00.639807  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:00.639838  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:00.639754  302377 retry.go:31] will retry after 1.949675091s: waiting for machine to come up
	I0729 13:37:58.983188  301044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.352336314s)
	I0729 13:37:58.983233  301044 crio.go:469] duration metric: took 2.352468802s to extract the tarball
	I0729 13:37:58.983245  301044 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:37:59.022539  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:59.086881  301044 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:37:59.086913  301044 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:37:59.086924  301044 kubeadm.go:934] updating node { 192.168.50.34 8444 v1.30.3 crio true true} ...
	I0729 13:37:59.087062  301044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-972693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:59.087158  301044 ssh_runner.go:195] Run: crio config
	I0729 13:37:59.144128  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.144163  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:59.144182  301044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:59.144209  301044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-972693 NodeName:default-k8s-diff-port-972693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:59.144376  301044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-972693"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
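(Editor's note, illustration only.) The kubeadm config above is generated from the cluster and node parameters (advertise address 192.168.50.34, bind port 8444, pod subnet 10.244.0.0/16, CRI socket, cgroup driver) and written to /var/tmp/minikube/kubeadm.yaml.new. A trimmed-down sketch of rendering such a document with text/template; only a few of the fields from the real config are carried over, and the template shape is illustrative rather than minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type kubeadmParams struct {
	NodeIP            string
	APIServerPort     int
	CRISocket         string
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

func main() {
	// Values copied from the log above.
	p := kubeadmParams{
		NodeIP:            "192.168.50.34",
		APIServerPort:     8444,
		CRISocket:         "unix:///var/run/crio/crio.sock",
		NodeName:          "default-k8s-diff-port-972693",
		KubernetesVersion: "v1.30.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}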
	I0729 13:37:59.144452  301044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:37:59.154648  301044 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:59.154717  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:59.164572  301044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0729 13:37:59.182967  301044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:37:59.202507  301044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0729 13:37:59.221603  301044 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:59.226646  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:59.244199  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:59.390312  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:59.411152  301044 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693 for IP: 192.168.50.34
	I0729 13:37:59.411178  301044 certs.go:194] generating shared ca certs ...
	I0729 13:37:59.411213  301044 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:59.411421  301044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:59.411481  301044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:59.411495  301044 certs.go:256] generating profile certs ...
	I0729 13:37:59.411614  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/client.key
	I0729 13:37:59.411709  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key.0cff1f82
	I0729 13:37:59.411780  301044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key
	I0729 13:37:59.411977  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:59.412036  301044 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:59.412052  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:59.412090  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:59.412124  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:59.412156  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:59.412221  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:59.413262  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:59.450186  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:59.496339  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:59.535462  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:59.569433  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:37:59.602826  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:37:59.639581  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:59.672966  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:37:59.707007  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:59.741894  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:59.771364  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:59.802928  301044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:59.828730  301044 ssh_runner.go:195] Run: openssl version
	I0729 13:37:59.837356  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:59.855071  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861707  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861781  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.870815  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:59.884842  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:59.899473  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904238  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904312  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.910221  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:59.923542  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:59.936729  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943440  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943496  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.951099  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
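
The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: `openssl x509 -hash -noout` prints the hash that OpenSSL's directory lookup expects as the file name, and the `ln -fs` commands create `<hash>.0` links in /etc/ssl/certs so the installed CAs are trusted system-wide. A small Go sketch of that step, shelling out to openssl the same way the log does (paths are illustrative, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash computes the OpenSSL subject hash of a PEM certificate and
    // symlinks it into certsDir as <hash>.0, mirroring the openssl + ln -fs
    // pair above so OpenSSL's hash-based CA lookup can find the certificate.
    func linkByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // the -f in "ln -fs": replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// Paths taken from the log; creating links in /etc/ssl/certs needs root.
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
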
	I0729 13:37:59.964578  301044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:59.969476  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:59.975715  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:59.981719  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:59.987788  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:59.993753  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:00.000228  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
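
The `-checkend 86400` invocations above ask openssl whether each control-plane certificate remains valid for at least the next 24 hours, presumably so that soon-to-expire certificates can be regenerated on restart. The same check can be expressed directly with Go's crypto/x509, as in this hedged sketch (the certificate path is one of those listed in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor24h reports whether the PEM certificate at path is still valid 24
    // hours from now — the question "openssl x509 -checkend 86400" answers.
    func validFor24h(path string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	fmt.Println(ok, err)
    }
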
	I0729 13:38:00.007898  301044 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:00.008033  301044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:00.008091  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.054999  301044 cri.go:89] found id: ""
	I0729 13:38:00.055097  301044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:00.069066  301044 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:00.069090  301044 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:00.069148  301044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:00.083486  301044 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:00.084538  301044 kubeconfig.go:125] found "default-k8s-diff-port-972693" server: "https://192.168.50.34:8444"
	I0729 13:38:00.086623  301044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:00.099514  301044 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.34
	I0729 13:38:00.099555  301044 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:00.099570  301044 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:00.099644  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.137643  301044 cri.go:89] found id: ""
	I0729 13:38:00.137726  301044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:00.157036  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:00.168591  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:00.168614  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:00.168664  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:38:00.178379  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:00.178449  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:00.189688  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:38:00.199323  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:00.199388  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:00.209351  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.219100  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:00.219171  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.228754  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:38:00.238453  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:00.238526  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:00.248479  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:00.258717  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.377121  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.413128  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:00.424610  300746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:00.446537  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:01.601214  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:01.601265  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:01.601278  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:01.601296  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:01.601305  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:01.601312  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:38:01.601323  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:01.601332  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:01.601346  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:01.601357  300746 system_pods.go:74] duration metric: took 1.154789909s to wait for pod list to return data ...
	I0729 13:38:01.601370  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:02.057111  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:02.057149  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:02.057182  300746 node_conditions.go:105] duration metric: took 455.806302ms to run NodePressure ...
	I0729 13:38:02.057210  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.420014  300746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426444  300746 kubeadm.go:739] kubelet initialised
	I0729 13:38:02.426467  300746 kubeadm.go:740] duration metric: took 6.420611ms waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426478  300746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:02.431168  300746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.436892  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436916  300746 pod_ready.go:81] duration metric: took 5.728016ms for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.436925  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436932  300746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.443079  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443102  300746 pod_ready.go:81] duration metric: took 6.163444ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.443110  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443115  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.447945  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447964  300746 pod_ready.go:81] duration metric: took 4.843364ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.447973  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447980  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.457004  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457027  300746 pod_ready.go:81] duration metric: took 9.037058ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.457038  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457045  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.825208  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825246  300746 pod_ready.go:81] duration metric: took 368.180356ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.825259  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825268  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.225868  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.225975  300746 pod_ready.go:81] duration metric: took 400.697293ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.225993  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.226003  300746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.627568  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627605  300746 pod_ready.go:81] duration metric: took 401.589314ms for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.627618  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627628  300746 pod_ready.go:38] duration metric: took 1.201138036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:03.627651  300746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:03.646855  300746 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:03.646893  300746 kubeadm.go:597] duration metric: took 12.009173344s to restartPrimaryControlPlane
	I0729 13:38:03.646910  300746 kubeadm.go:394] duration metric: took 12.059279913s to StartCluster
	I0729 13:38:03.646936  300746 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.647029  300746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:03.649213  300746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.649527  300746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:03.649810  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:38:03.649861  300746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:03.649931  300746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-566777"
	I0729 13:38:03.649962  300746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-566777"
	W0729 13:38:03.649974  300746 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:03.650021  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650400  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.650428  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.650493  300746 addons.go:69] Setting default-storageclass=true in profile "no-preload-566777"
	I0729 13:38:03.650533  300746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-566777"
	I0729 13:38:03.650601  300746 addons.go:69] Setting metrics-server=true in profile "no-preload-566777"
	I0729 13:38:03.650631  300746 addons.go:234] Setting addon metrics-server=true in "no-preload-566777"
	W0729 13:38:03.650642  300746 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:03.650675  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650985  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651014  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651029  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651054  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651324  300746 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:03.652887  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:03.670088  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0729 13:38:03.670283  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0729 13:38:03.670694  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.670769  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.671418  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671423  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671437  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671440  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671755  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0729 13:38:03.671900  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.671927  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.672491  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.672515  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.672711  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.673183  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.673207  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.673468  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.673480  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.673857  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.674012  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.677726  300746 addons.go:234] Setting addon default-storageclass=true in "no-preload-566777"
	W0729 13:38:03.677746  300746 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:03.677777  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.678133  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.678151  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.692817  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 13:38:03.693446  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.693919  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.693945  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.694335  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.694504  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.694718  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0729 13:38:03.695225  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.695726  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.695744  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.696028  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.696154  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.696514  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.697635  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.698597  300746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:03.699466  300746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:03.700447  300746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:03.700463  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:03.700481  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.701375  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:03.701390  300746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:03.701404  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.705199  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705225  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705844  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705866  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705893  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705911  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705946  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706143  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706313  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.706471  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.706755  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.708988  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.710193  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0729 13:38:03.710735  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.711282  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.711296  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.711684  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.712271  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.712322  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.712966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.713103  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.756710  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 13:38:03.757254  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.757760  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.757784  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.758125  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.758376  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.760315  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.760577  300746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:03.760594  300746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:03.760612  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.763679  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.764208  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.764277  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.765045  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.765227  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.765386  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.765546  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.883257  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:03.905104  300746 node_ready.go:35] waiting up to 6m0s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:03.985382  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:03.985412  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:04.014094  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:04.014119  300746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:04.016390  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:04.047695  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:04.062249  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:04.062328  300746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:04.095999  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:05.473341  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4569173s)
	I0729 13:38:05.473396  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473409  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.473421  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.425688075s)
	I0729 13:38:05.473547  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473558  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474089  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.474117  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474129  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474133  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474137  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474142  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474158  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474148  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474213  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.475707  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.475738  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.475746  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.476002  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.476095  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.476124  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.490038  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.490081  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.490420  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.490440  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562064  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46596112s)
	I0729 13:38:05.562122  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562136  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.562492  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.562516  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562532  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562541  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.564397  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.564410  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.564448  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.564471  300746 addons.go:475] Verifying addon metrics-server=true in "no-preload-566777"
	I0729 13:38:05.566888  300746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:38:02.590640  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:02.591134  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:02.591162  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:02.591087  302377 retry.go:31] will retry after 1.765945358s: waiting for machine to come up
	I0729 13:38:04.358332  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:04.358934  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:04.358963  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:04.358899  302377 retry.go:31] will retry after 2.923224015s: waiting for machine to come up
	I0729 13:38:01.713425  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.33625836s)
	I0729 13:38:01.713462  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:01.941164  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.017707  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
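
Rather than a full `kubeadm init`, the restart path replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and etcd, with the addon phase following once the apiserver is healthy. A compact Go sketch of that sequencing (binary and config paths mirror the log; the real invocations are also wrapped in sudo with a PATH override, omitted here):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // runInitPhases replays the kubeadm init phases shown in the log, in order,
    // against a pre-rendered kubeadm.yaml.
    func runInitPhases(kubeadmBin, config string) error {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(phase, "--config", config)
    		cmd := exec.Command(kubeadmBin, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			return fmt.Errorf("kubeadm %s: %w", strings.Join(phase, " "), err)
    		}
    	}
    	return nil
    }

    func main() {
    	// Paths copied from the log above.
    	bin := "/var/lib/minikube/binaries/v1.30.3/kubeadm"
    	if err := runInitPhases(bin, "/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
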
	I0729 13:38:02.134991  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:02.135105  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:02.636248  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.135563  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.264470  301044 api_server.go:72] duration metric: took 1.129485078s to wait for apiserver process to appear ...
	I0729 13:38:03.264512  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:03.264545  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.392570  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.392609  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.392626  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.423076  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.423120  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.764837  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.770393  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:06.770428  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.264879  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.269632  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:07.269670  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.764878  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.770291  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:38:07.781660  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:07.781691  301044 api_server.go:131] duration metric: took 4.517171532s to wait for apiserver health ...
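
The healthz probes above show the usual apiserver startup progression: 403 while anonymous requests to /healthz are still rejected (the RBAC bootstrap roles are not yet in place), then 500 while the remaining post-start hooks report failures, and finally 200. A minimal Go polling loop in the same spirit (URL and timings are illustrative; TLS verification is skipped because the cluster CA is self-signed):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    // waitHealthz polls an HTTPS /healthz endpoint until it returns 200 OK or
    // the deadline passes, logging non-200 bodies the way the report does.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			log.Printf("healthz returned %d: %s", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	// Endpoint taken from the log; the wait budget is an illustrative choice.
    	if err := waitHealthz("https://192.168.50.34:8444/healthz", 4*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    }
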
	I0729 13:38:07.781700  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:38:07.781707  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:07.784769  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:05.568441  300746 addons.go:510] duration metric: took 1.918571396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:38:05.916109  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:07.284234  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:07.284764  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:07.284819  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:07.284694  302377 retry.go:31] will retry after 2.9786525s: waiting for machine to come up
	I0729 13:38:10.265771  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:10.266128  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:10.266161  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:10.266077  302377 retry.go:31] will retry after 5.044155966s: waiting for machine to come up
	I0729 13:38:07.786038  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:07.824838  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:07.850139  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:07.862900  301044 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:07.862932  301044 system_pods.go:61] "coredns-7db6d8ff4d-zllk5" [3ebb659a-7849-498b-a81c-54f75c8e1536] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:07.862943  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [fc5c7286-5cd4-4eeb-879e-6263f82c4164] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:07.862950  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [a3a13c0b-844d-4a5b-93a0-fb9784b4b095] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:07.862957  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4e6c469d-b2a5-4ec2-95a4-01b6ad7de347] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:07.862964  301044 system_pods.go:61] "kube-proxy-6hxkb" [42b01d8b-9a37-40d0-ac32-09e3e261f953] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:07.862979  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [2373a650-57bb-4dc3-96ab-7f6cd040c148] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:07.862985  301044 system_pods.go:61] "metrics-server-569cc877fc-dlrjb" [360087fa-273d-4ba8-a299-54678724c45e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:07.862990  301044 system_pods.go:61] "storage-provisioner" [3e3fb5ef-6761-4671-a093-8616241cd98f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:07.862996  301044 system_pods.go:74] duration metric: took 12.833023ms to wait for pod list to return data ...
	I0729 13:38:07.863007  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:07.868359  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:07.868385  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:07.868395  301044 node_conditions.go:105] duration metric: took 5.383164ms to run NodePressure ...
	I0729 13:38:07.868412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:08.166890  301044 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175546  301044 kubeadm.go:739] kubelet initialised
	I0729 13:38:08.175570  301044 kubeadm.go:740] duration metric: took 8.646638ms waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175588  301044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.186944  301044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.194446  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194479  301044 pod_ready.go:81] duration metric: took 7.500494ms for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.194487  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194495  301044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.202341  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202366  301044 pod_ready.go:81] duration metric: took 7.863125ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.202380  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202388  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.209017  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209041  301044 pod_ready.go:81] duration metric: took 6.646309ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.209051  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209057  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.256503  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256530  301044 pod_ready.go:81] duration metric: took 47.465005ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.256543  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256552  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652875  301044 pod_ready.go:92] pod "kube-proxy-6hxkb" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:08.652901  301044 pod_ready.go:81] duration metric: took 396.340654ms for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652912  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.658352  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:08.411629  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:08.908602  300746 node_ready.go:49] node "no-preload-566777" has status "Ready":"True"
	I0729 13:38:08.908629  300746 node_ready.go:38] duration metric: took 5.003487604s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:08.908639  300746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.914468  300746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.921796  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.313102  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313621  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313650  301425 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:38:15.313665  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:38:15.314120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.314168  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | skip adding static IP to network mk-old-k8s-version-924039 - found existing host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"}
	I0729 13:38:15.314187  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:38:15.314205  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:38:15.314219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:38:15.316468  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316779  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.316827  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316994  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:38:15.317013  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:38:15.317042  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:15.317054  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:38:15.317076  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:38:15.444818  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:15.445203  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:38:15.445858  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.448296  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.448784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.448834  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.449028  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:38:15.449208  301425 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:15.449226  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:15.449469  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.451695  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452017  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.452046  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.452420  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452606  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452770  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.452945  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.453151  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.453165  301425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:15.561558  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:15.561590  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.561859  301425 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:38:15.561887  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.562079  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.564776  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565116  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.565157  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565286  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.565495  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565669  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565805  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.565952  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.566129  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.566140  301425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:38:15.687712  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:38:15.687744  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.690289  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690614  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.690638  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690864  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.691104  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691290  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691463  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.691649  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.691841  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.691869  301425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:15.814102  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:15.814140  301425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:15.814190  301425 buildroot.go:174] setting up certificates
	I0729 13:38:15.814198  301425 provision.go:84] configureAuth start
	I0729 13:38:15.814210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.814521  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.817140  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817548  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.817583  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817728  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.819957  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820307  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.820335  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820476  301425 provision.go:143] copyHostCerts
	I0729 13:38:15.820529  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:15.820539  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:15.820592  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:15.820685  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:15.820693  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:15.820713  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:15.820772  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:15.820779  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:15.820828  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:15.820909  301425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:38:15.895797  301425 provision.go:177] copyRemoteCerts
	I0729 13:38:15.895866  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:15.895898  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.898774  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899173  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.899214  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899444  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.899672  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.899882  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.900048  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.606081  300705 start.go:364] duration metric: took 56.40993179s to acquireMachinesLock for "embed-certs-135920"
	I0729 13:38:16.606131  300705 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:38:16.606139  300705 fix.go:54] fixHost starting: 
	I0729 13:38:16.606611  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:16.606652  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:16.626502  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0729 13:38:16.626989  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:16.627491  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:16.627511  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:16.627897  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:16.628100  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:16.628242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:16.629856  300705 fix.go:112] recreateIfNeeded on embed-certs-135920: state=Stopped err=<nil>
	I0729 13:38:16.629879  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	W0729 13:38:16.630046  300705 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:38:16.632177  300705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-135920" ...
	I0729 13:38:12.659133  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.159457  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.159792  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.159818  301044 pod_ready.go:81] duration metric: took 7.506898395s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.159827  301044 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.633625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Start
	I0729 13:38:16.633803  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring networks are active...
	I0729 13:38:16.634580  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network default is active
	I0729 13:38:16.634947  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network mk-embed-certs-135920 is active
	I0729 13:38:16.635454  300705 main.go:141] libmachine: (embed-certs-135920) Getting domain xml...
	I0729 13:38:16.636201  300705 main.go:141] libmachine: (embed-certs-135920) Creating domain...
	I0729 13:38:15.988091  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:16.019058  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:38:16.047266  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:16.072992  301425 provision.go:87] duration metric: took 258.777499ms to configureAuth
	I0729 13:38:16.073029  301425 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:16.073250  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:38:16.073338  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.075801  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.076219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076350  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.076560  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076750  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076972  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.077169  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.077354  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.077369  301425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:16.357614  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:16.357650  301425 machine.go:97] duration metric: took 908.424232ms to provisionDockerMachine
	I0729 13:38:16.357666  301425 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:38:16.357680  301425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:16.357706  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.358060  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:16.358089  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.360841  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361257  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.361314  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361410  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.361645  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.361821  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.361987  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.448673  301425 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:16.453435  301425 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:16.453461  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:16.453543  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:16.453638  301425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:16.453763  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:16.464185  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:16.490358  301425 start.go:296] duration metric: took 132.675687ms for postStartSetup
	I0729 13:38:16.490422  301425 fix.go:56] duration metric: took 23.088507704s for fixHost
	I0729 13:38:16.490450  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.493249  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493571  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.493612  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493781  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.494046  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494241  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494388  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.494561  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.494759  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.494769  301425 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:16.605903  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260296.583363181
	
	I0729 13:38:16.605930  301425 fix.go:216] guest clock: 1722260296.583363181
	I0729 13:38:16.605940  301425 fix.go:229] Guest: 2024-07-29 13:38:16.583363181 +0000 UTC Remote: 2024-07-29 13:38:16.490427183 +0000 UTC m=+245.556685019 (delta=92.935998ms)
	I0729 13:38:16.605967  301425 fix.go:200] guest clock delta is within tolerance: 92.935998ms
	I0729 13:38:16.605974  301425 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 23.204101255s
	I0729 13:38:16.606006  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.606296  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:16.609324  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609669  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.609701  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609826  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610328  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610516  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610589  301425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:16.610673  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.610758  301425 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:16.610786  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.613356  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613639  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613689  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.613712  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613910  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614092  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.614112  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.614122  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614287  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614307  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614449  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.614496  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614635  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614771  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.719174  301425 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:16.726348  301425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:16.880130  301425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:16.886410  301425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:16.886484  301425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:16.904120  301425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:16.904151  301425 start.go:495] detecting cgroup driver to use...
	I0729 13:38:16.904222  301425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:16.927036  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:16.947380  301425 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:16.947448  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:16.964612  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:16.979266  301425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:17.108950  301425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:17.263118  301425 docker.go:233] disabling docker service ...
	I0729 13:38:17.263192  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:17.282563  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:17.299473  301425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:17.448598  301425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:17.568025  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:17.583700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:17.603159  301425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:38:17.603223  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.615655  301425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:17.615728  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.628639  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.640456  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.652160  301425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:17.663864  301425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:17.675293  301425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:17.675361  301425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:17.690427  301425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:17.702163  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:17.831401  301425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:17.985760  301425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:17.985851  301425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:17.990740  301425 start.go:563] Will wait 60s for crictl version
	I0729 13:38:17.990798  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:17.994741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:18.035793  301425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:18.035889  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.065036  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.097441  301425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:38:13.421995  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.944090  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.933596  300746 pod_ready.go:92] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.933621  300746 pod_ready.go:81] duration metric: took 8.019124005s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.933634  300746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943434  300746 pod_ready.go:92] pod "etcd-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.943465  300746 pod_ready.go:81] duration metric: took 9.816863ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943478  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952623  300746 pod_ready.go:92] pod "kube-apiserver-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.952644  300746 pod_ready.go:81] duration metric: took 9.157998ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952653  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.956989  300746 pod_ready.go:92] pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.957010  300746 pod_ready.go:81] duration metric: took 4.350015ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.957023  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962772  300746 pod_ready.go:92] pod "kube-proxy-ql6wf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.962796  300746 pod_ready.go:81] duration metric: took 5.763769ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962807  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318604  300746 pod_ready.go:92] pod "kube-scheduler-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:17.318632  300746 pod_ready.go:81] duration metric: took 355.816982ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318642  300746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:18.098840  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:18.102182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102629  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:18.102665  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102925  301425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:18.107544  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:18.122039  301425 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:18.122176  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:38:18.122249  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:18.169198  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:18.169279  301425 ssh_runner.go:195] Run: which lz4
	I0729 13:38:18.173861  301425 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:18.178840  301425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:18.178881  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:38:19.887360  301425 crio.go:462] duration metric: took 1.713549828s to copy over tarball
	I0729 13:38:19.887450  301425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:18.167033  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:20.168009  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:17.933984  300705 main.go:141] libmachine: (embed-certs-135920) Waiting to get IP...
	I0729 13:38:17.935033  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:17.935595  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:17.935652  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:17.935560  302586 retry.go:31] will retry after 195.331915ms: waiting for machine to come up
	I0729 13:38:18.133074  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.133566  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.133592  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.133513  302586 retry.go:31] will retry after 348.993714ms: waiting for machine to come up
	I0729 13:38:18.484164  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.484746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.484771  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.484703  302586 retry.go:31] will retry after 372.899167ms: waiting for machine to come up
	I0729 13:38:18.859212  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.859721  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.859746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.859672  302586 retry.go:31] will retry after 415.38859ms: waiting for machine to come up
	I0729 13:38:19.276241  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.276785  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.276816  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.276715  302586 retry.go:31] will retry after 553.262343ms: waiting for machine to come up
	I0729 13:38:19.831475  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.831994  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.832030  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.831949  302586 retry.go:31] will retry after 579.574559ms: waiting for machine to come up
	I0729 13:38:20.412838  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:20.413273  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:20.413302  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:20.413225  302586 retry.go:31] will retry after 908.712618ms: waiting for machine to come up
	I0729 13:38:21.324197  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:21.324824  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:21.324849  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:21.324723  302586 retry.go:31] will retry after 1.4226484s: waiting for machine to come up
	I0729 13:38:19.328753  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:21.330005  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.836067  301425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.948583188s)
	I0729 13:38:22.836104  301425 crio.go:469] duration metric: took 2.948710335s to extract the tarball
	I0729 13:38:22.836114  301425 ssh_runner.go:146] rm: /preloaded.tar.lz4
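The preload step above is simply copy, extract, clean up. A minimal manual sketch of the same sequence, assuming SSH access to the guest; "docker@<vm-ip>" and the local tarball path are placeholders, not values from this run:

	# Sketch only: manual equivalent of the preload copy/extract/cleanup logged above.
	scp preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 docker@<vm-ip>:/preloaded.tar.lz4
	ssh docker@<vm-ip> 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'
	ssh docker@<vm-ip> 'sudo rm /preloaded.tar.lz4'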
	I0729 13:38:22.878370  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:22.921339  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:22.921370  301425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:38:22.921445  301425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.921545  301425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.921547  301425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:38:22.921633  301425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:22.921475  301425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.921479  301425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923052  301425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:38:22.923712  301425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.923723  301425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923733  301425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.923743  301425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.923803  301425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.923923  301425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.923976  301425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.079335  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.095210  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.096664  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.109172  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.111720  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.114386  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.200545  301425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:38:23.200629  301425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.200698  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.203884  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:38:23.261424  301425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:38:23.261500  301425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.261528  301425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:38:23.261561  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.261569  301425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.261610  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.267971  301425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:38:23.268018  301425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.268075  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317322  301425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:38:23.317369  301425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.317387  301425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:38:23.317422  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317441  301425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.317440  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.317489  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317507  301425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:38:23.317530  301425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:38:23.317551  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.317588  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.317553  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317683  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.322770  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.432764  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:38:23.432833  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:38:23.432877  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.442661  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:38:23.442741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:38:23.442785  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:38:23.442825  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:38:23.481401  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:38:23.484727  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:38:24.057020  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:24.203622  301425 cache_images.go:92] duration metric: took 1.282232497s to LoadCachedImages
	W0729 13:38:24.203724  301425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
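Since none of the cached image files exist on the host, LoadCachedImages gives up and the v1.20.0 images end up being pulled later. Purely for illustration (this is not a step minikube performs in this run), the same images could be pre-pulled on the node with crictl:

	# Illustrative only; not part of minikube's flow here.
	for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
	  sudo crictl pull "registry.k8s.io/${img}:v1.20.0"
	done
	sudo crictl pull registry.k8s.io/etcd:3.4.13-0
	sudo crictl pull registry.k8s.io/coredns:1.7.0
	sudo crictl pull registry.k8s.io/pause:3.2
	sudo crictl pull gcr.io/k8s-minikube/storage-provisioner:v5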
	I0729 13:38:24.203742  301425 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:38:24.203883  301425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
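The unit text above is written a few lines further down as the kubelet systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A minimal sketch of activating such a drop-in by hand, mirroring the daemon-reload/start pair that appears later in this log:

	# Sketch: after the drop-in is in place, reload systemd units and start the kubelet.
	sudo systemctl daemon-reload
	sudo systemctl start kubelet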
	I0729 13:38:24.203996  301425 ssh_runner.go:195] Run: crio config
	I0729 13:38:24.274480  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:38:24.274531  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:24.274547  301425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:24.274582  301425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:38:24.274784  301425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
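The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written out as /var/tmp/minikube/kubeadm.yaml.new and later copied into place. A hedged sketch of sanity-checking such a config with a kubeadm dry run, once it has been copied to /var/tmp/minikube/kubeadm.yaml (illustrative; the test instead runs individual init phases, shown below):

	# Illustrative only: dry-run kubeadm against the generated config.
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run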
	
	I0729 13:38:24.274863  301425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:38:24.285241  301425 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:24.285333  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:24.294677  301425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:38:24.311572  301425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:24.328768  301425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:38:24.346849  301425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:24.351047  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:24.364302  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:24.502947  301425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:24.524583  301425 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:38:24.524610  301425 certs.go:194] generating shared ca certs ...
	I0729 13:38:24.524626  301425 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:24.524831  301425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:24.524889  301425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:24.524908  301425 certs.go:256] generating profile certs ...
	I0729 13:38:24.525030  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:38:24.525090  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:38:24.525143  301425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:38:24.525300  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:24.525345  301425 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:24.525359  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:24.525390  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:24.525416  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:24.525440  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:24.525495  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:24.526416  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:24.593901  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:24.641443  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:24.679927  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:24.740839  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:38:24.779899  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:38:24.814327  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:24.842166  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:38:24.868619  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:24.894053  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:24.921437  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:24.947676  301425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:24.966469  301425 ssh_runner.go:195] Run: openssl version
	I0729 13:38:24.972780  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:24.985676  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990293  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990356  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.996523  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:25.007631  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:25.018369  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022779  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022840  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.028471  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:25.039307  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:25.050190  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054731  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054799  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.060568  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
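The ln/openssl pairs above implement OpenSSL's hashed-directory lookup convention: each CA certificate in /etc/ssl/certs gets a symlink named after its subject hash with a ".0" suffix. A minimal sketch of the same pattern (paths taken from the log, the pattern itself is generic):

	# Subject-hash symlink convention used above.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"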
	I0729 13:38:25.071531  301425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:25.076195  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:25.082194  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:25.088573  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:25.095625  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:25.101900  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:25.107797  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
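The -checkend 86400 runs above ask whether each certificate expires within the next 24 hours (86400 seconds). A tiny sketch of the same check with an explicit exit-status test, using one of the certificate paths from the log:

	# -checkend N exits non-zero if the certificate expires within N seconds.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h (or could not be read)"
	fi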
	I0729 13:38:25.113775  301425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:25.113903  301425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:25.113975  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.159804  301425 cri.go:89] found id: ""
	I0729 13:38:25.159887  301425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:25.172248  301425 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:25.172271  301425 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:25.172321  301425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:25.182852  301425 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:25.184249  301425 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:25.186246  301425 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-924039" cluster setting kubeconfig missing "old-k8s-version-924039" context setting]
	I0729 13:38:25.188334  301425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:25.262355  301425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:25.274019  301425 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0729 13:38:25.274063  301425 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:25.274078  301425 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:25.274141  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.311295  301425 cri.go:89] found id: ""
	I0729 13:38:25.311365  301425 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:25.330380  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:25.343607  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:25.343651  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:25.343709  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:25.356979  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:25.357048  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:25.370453  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:25.386234  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:25.386308  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:25.403905  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.413906  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:25.414011  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.431532  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:25.448250  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:25.448325  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
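The grep/rm pairs above are minikube's stale-config cleanup: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A condensed sketch of the same per-file check:

	# Sketch of the stale kubeconfig cleanup logged above.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done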
	I0729 13:38:25.459773  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:25.469841  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:25.584845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:22.667857  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:24.668022  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.748882  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:22.749346  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:22.749368  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:22.749292  302586 retry.go:31] will retry after 1.460248931s: waiting for machine to come up
	I0729 13:38:24.212019  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:24.212538  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:24.212567  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:24.212479  302586 retry.go:31] will retry after 1.462429402s: waiting for machine to come up
	I0729 13:38:25.676972  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:25.677407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:25.677429  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:25.677368  302586 retry.go:31] will retry after 2.551129627s: waiting for machine to come up
	I0729 13:38:23.826435  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:25.826981  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.325176  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:26.367294  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.618571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.775377  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
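Rather than a full kubeadm init, the restart path regenerates the control plane through individual init phases in the order certs, kubeconfig, kubelet-start, control-plane, etcd, as logged above. A condensed sketch of the same sequence (paths as in the log; illustrative only):

	# Same restart sequence, condensed.
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done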
	I0729 13:38:26.860948  301425 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:26.861038  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.361227  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.362003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.861172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.361165  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.861469  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.361306  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.861442  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
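The repeated pgrep runs above are minikube polling roughly every 500ms for the kube-apiserver process to appear. A minimal equivalent wait loop (illustrative):

	# Poll until the apiserver process shows up, mirroring the pgrep calls above.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done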
	I0729 13:38:27.167961  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:29.667405  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.230763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:28.231276  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:28.231299  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:28.231239  302586 retry.go:31] will retry after 2.333059097s: waiting for machine to come up
	I0729 13:38:30.566386  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:30.566786  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:30.566815  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:30.566733  302586 retry.go:31] will retry after 3.717362174s: waiting for machine to come up
	I0729 13:38:30.326143  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:32.825635  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:31.361866  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.361776  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.862004  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.361883  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.862010  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.362013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.861958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.361390  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.861465  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.165082  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.165674  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.165885  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.288242  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288935  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has current primary IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288968  300705 main.go:141] libmachine: (embed-certs-135920) Found IP for machine: 192.168.72.207
	I0729 13:38:34.288987  300705 main.go:141] libmachine: (embed-certs-135920) Reserving static IP address...
	I0729 13:38:34.289557  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.289586  300705 main.go:141] libmachine: (embed-certs-135920) Reserved static IP address: 192.168.72.207
	I0729 13:38:34.289604  300705 main.go:141] libmachine: (embed-certs-135920) DBG | skip adding static IP to network mk-embed-certs-135920 - found existing host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"}
	I0729 13:38:34.289619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Getting to WaitForSSH function...
	I0729 13:38:34.289635  300705 main.go:141] libmachine: (embed-certs-135920) Waiting for SSH to be available...
	I0729 13:38:34.291951  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292308  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.292340  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292589  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH client type: external
	I0729 13:38:34.292619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa (-rw-------)
	I0729 13:38:34.292651  300705 main.go:141] libmachine: (embed-certs-135920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:34.292665  300705 main.go:141] libmachine: (embed-certs-135920) DBG | About to run SSH command:
	I0729 13:38:34.292677  300705 main.go:141] libmachine: (embed-certs-135920) DBG | exit 0
	I0729 13:38:34.417738  300705 main.go:141] libmachine: (embed-certs-135920) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:34.418128  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetConfigRaw
	I0729 13:38:34.418881  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.421524  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.421875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.421911  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.422113  300705 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/config.json ...
	I0729 13:38:34.422306  300705 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:34.422325  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:34.422544  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.424658  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.425073  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425167  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.425365  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425575  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425786  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.425935  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.426155  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.426172  300705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:34.529324  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:34.529354  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529600  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:38:34.529625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.532564  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.532966  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.533001  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.533274  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.533502  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533701  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533906  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.534116  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.534339  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.534353  300705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-135920 && echo "embed-certs-135920" | sudo tee /etc/hostname
	I0729 13:38:34.651175  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-135920
	
	I0729 13:38:34.651203  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.653763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.654085  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654266  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.654460  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654647  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654838  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.655024  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.655230  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.655246  300705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-135920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-135920/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-135920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:34.769548  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:34.769579  300705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:34.769597  300705 buildroot.go:174] setting up certificates
	I0729 13:38:34.769605  300705 provision.go:84] configureAuth start
	I0729 13:38:34.769613  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.769910  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.772513  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.772833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.772859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.773005  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.775133  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775480  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.775506  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775607  300705 provision.go:143] copyHostCerts
	I0729 13:38:34.775671  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:34.775681  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:34.775738  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:34.775828  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:34.775836  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:34.775855  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:34.775909  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:34.775916  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:34.775932  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:34.775981  300705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.embed-certs-135920 san=[127.0.0.1 192.168.72.207 embed-certs-135920 localhost minikube]
	I0729 13:38:34.901161  300705 provision.go:177] copyRemoteCerts
	I0729 13:38:34.901230  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:34.901258  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.903730  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.904060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904245  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.904428  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.904606  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.904726  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:34.986647  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:35.010406  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:38:35.033884  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:35.057289  300705 provision.go:87] duration metric: took 287.670762ms to configureAuth
	I0729 13:38:35.057318  300705 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:35.057521  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:35.057621  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.060303  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060634  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.060667  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060840  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.061053  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061259  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061433  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.061599  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.061775  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.061792  300705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:35.344890  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:35.344923  300705 machine.go:97] duration metric: took 922.603779ms to provisionDockerMachine
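	(Note on the provisioning step above: the runner writes a one-line environment drop-in for the CRI-O service and restarts it. Below is a minimal Go sketch, not minikube's actual provision code; the target path and option string are copied from the SSH command in the log, everything else is illustrative.)

```go
// Minimal sketch of the CRI-O options drop-in written above; not minikube's code.
// Path and option string come from the log; the rest is illustrative.
package main

import "fmt"

func main() {
	const path = "/etc/sysconfig/crio.minikube"
	const opts = "--insecure-registry 10.96.0.0/12" // cluster service CIDR, per the log
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s '\n", opts)
	// On the guest this content is piped through `sudo tee` and followed by
	// `sudo systemctl restart crio`, as shown in the SSH command above.
	fmt.Printf("would write to %s:\n%s", path, content)
}
```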
	I0729 13:38:35.344936  300705 start.go:293] postStartSetup for "embed-certs-135920" (driver="kvm2")
	I0729 13:38:35.344947  300705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:35.344964  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.345304  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:35.345341  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.348029  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348420  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.348458  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348612  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.348832  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.348981  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.349112  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.431975  300705 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:35.436416  300705 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:35.436441  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:35.436522  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:35.436621  300705 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:35.436767  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:35.446166  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:35.473466  300705 start.go:296] duration metric: took 128.511199ms for postStartSetup
	I0729 13:38:35.473513  300705 fix.go:56] duration metric: took 18.867373858s for fixHost
	I0729 13:38:35.473540  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.476118  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476477  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.476504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476672  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.476877  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477093  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477241  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.477468  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.477642  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.477652  300705 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:35.577853  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260315.546644144
	
	I0729 13:38:35.577882  300705 fix.go:216] guest clock: 1722260315.546644144
	I0729 13:38:35.577892  300705 fix.go:229] Guest: 2024-07-29 13:38:35.546644144 +0000 UTC Remote: 2024-07-29 13:38:35.473518121 +0000 UTC m=+357.868969453 (delta=73.126023ms)
	I0729 13:38:35.577919  300705 fix.go:200] guest clock delta is within tolerance: 73.126023ms
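	(The guest/host clock comparison above reduces to taking the difference of two timestamps and checking it against a tolerance. A small sketch of that check follows; only the two timestamps are taken from the log, and the 2s tolerance is an assumed illustrative value, not necessarily the one minikube uses.)

```go
// Sketch of the guest-clock drift check seen above; the tolerance is assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, 7, 29, 13, 38, 35, 546644144, time.UTC)  // "Guest:" in the log
	remote := time.Date(2024, 7, 29, 13, 38, 35, 473518121, time.UTC) // "Remote:" in the log

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
```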
	I0729 13:38:35.577926  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 18.971820448s
	I0729 13:38:35.577950  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.578260  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:35.581109  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581474  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.581507  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581707  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582287  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582451  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582562  300705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:35.582616  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.582645  300705 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:35.582673  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.585527  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585555  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585989  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586021  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586062  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586084  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586171  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586351  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586360  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586573  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586582  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586795  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586838  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.586942  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.686359  300705 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:35.692726  300705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:35.838487  300705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:35.844313  300705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:35.844416  300705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:35.861079  300705 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:35.861103  300705 start.go:495] detecting cgroup driver to use...
	I0729 13:38:35.861178  300705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:35.880678  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:35.897996  300705 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:35.898070  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:35.915337  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:35.930990  300705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:36.039923  300705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:36.198255  300705 docker.go:233] disabling docker service ...
	I0729 13:38:36.198340  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:36.213373  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:36.227364  300705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:36.351279  300705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:36.468325  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:36.483692  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:36.503872  300705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:38:36.503945  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.515397  300705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:36.515502  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.527170  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.538668  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.550013  300705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:36.561402  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.573747  300705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.594158  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
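	(The series of `sed -i` edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. As a hypothetical Go equivalent of just the first of those edits, the one that sets pause_image, here is a regexp-based sketch; the sample config content is invented for illustration, and minikube itself runs sed over SSH rather than doing this.)

```go
// Hypothetical Go equivalent of the sed that sets pause_image above; the
// sample config text is illustrative, not taken from the node.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
`
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	fmt.Print(out)
}
```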
	I0729 13:38:36.606047  300705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:36.616858  300705 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:36.616961  300705 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:36.633281  300705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:36.644423  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:36.779934  300705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:36.924394  300705 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:36.924483  300705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:36.929889  300705 start.go:563] Will wait 60s for crictl version
	I0729 13:38:36.929935  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:38:36.933671  300705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:36.973428  300705 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:36.973506  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.002245  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.034982  300705 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:38:37.036162  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:37.039092  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:37.039533  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039697  300705 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:37.044028  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:37.057278  300705 kubeadm.go:883] updating cluster {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:37.057398  300705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:38:37.057504  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:37.096111  300705 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:38:37.096205  300705 ssh_runner.go:195] Run: which lz4
	I0729 13:38:37.100600  300705 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:37.104942  300705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:37.104974  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:38:35.325849  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:37.326770  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.362042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.862022  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.361208  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.862020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.362115  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.861360  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.362077  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.861478  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.361278  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.861920  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.167072  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:40.667067  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:38.548671  300705 crio.go:462] duration metric: took 1.448103052s to copy over tarball
	I0729 13:38:38.548764  300705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:40.801144  300705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.252337742s)
	I0729 13:38:40.801177  300705 crio.go:469] duration metric: took 2.252468783s to extract the tarball
	I0729 13:38:40.801185  300705 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:40.840132  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:40.887424  300705 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:38:40.887447  300705 cache_images.go:84] Images are preloaded, skipping loading
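	(The preload path above works by listing images with `crictl images --output json`, checking for an expected image such as kube-apiserver at the target version, and only copying and extracting the preloaded tarball into /var when it is missing. A sketch of that image check follows; the JSON field names reflect crictl's usual output shape and are an assumption here, not taken from the log.)

```go
// Sketch of checking crictl's image list for a required tag; the JSON shape
// (images[].repoTags) is assumed from crictl's typical output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.30.3" // image named in the log above
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image present:", want)
				return
			}
		}
	}
	fmt.Println("missing", want, "- would copy and extract the preload tarball")
}
```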
	I0729 13:38:40.887456  300705 kubeadm.go:934] updating node { 192.168.72.207 8443 v1.30.3 crio true true} ...
	I0729 13:38:40.887583  300705 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-135920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:40.887661  300705 ssh_runner.go:195] Run: crio config
	I0729 13:38:40.943732  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:40.943759  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:40.943771  300705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:40.943801  300705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.207 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-135920 NodeName:embed-certs-135920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:38:40.943967  300705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-135920"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:40.944048  300705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:38:40.954284  300705 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:40.954354  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:40.963877  300705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 13:38:40.981828  300705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:40.999273  300705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 13:38:41.016590  300705 ssh_runner.go:195] Run: grep 192.168.72.207	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:41.020149  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:41.031970  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:41.163779  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:41.181723  300705 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920 for IP: 192.168.72.207
	I0729 13:38:41.181746  300705 certs.go:194] generating shared ca certs ...
	I0729 13:38:41.181764  300705 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:41.181989  300705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:41.182053  300705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:41.182067  300705 certs.go:256] generating profile certs ...
	I0729 13:38:41.182191  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/client.key
	I0729 13:38:41.182257  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key.45ab1b35
	I0729 13:38:41.182306  300705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key
	I0729 13:38:41.182454  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:41.182501  300705 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:41.182517  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:41.182553  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:41.182583  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:41.182607  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:41.182647  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:41.183522  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:41.239170  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:41.278086  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:41.318584  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:41.351639  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 13:38:41.389242  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:38:41.414897  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:41.439178  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:38:41.464278  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:41.488391  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:41.515271  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:41.539904  300705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:41.557036  300705 ssh_runner.go:195] Run: openssl version
	I0729 13:38:41.562935  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:41.580782  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585603  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585670  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.591504  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:41.602129  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:41.612441  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616813  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616866  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.622328  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:41.633108  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:41.643897  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648369  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648415  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.654085  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:41.665037  300705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:41.670067  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:41.676340  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:41.682386  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:41.688809  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:41.694957  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:41.700469  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
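	(Each `openssl x509 -checkend 86400` call above asks whether the certificate will have expired 86400 seconds, i.e. 24 hours, from now. The same check expressed with Go's crypto/x509 is sketched below; the file path is a placeholder, while on the node the certificates live under /var/lib/minikube/certs as shown in the log.)

```go
// Sketch of the `openssl x509 -checkend 86400` check above using crypto/x509.
// The path is a placeholder for illustration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: report failure if the cert expires within the next 86400s.
	deadline := time.Now().Add(86400 * time.Second)
	fmt.Println("expires before 24h from now:", cert.NotAfter.Before(deadline))
}
```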
	I0729 13:38:41.706471  300705 kubeadm.go:392] StartCluster: {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:41.706561  300705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:41.706617  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.746623  300705 cri.go:89] found id: ""
	I0729 13:38:41.746703  300705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:41.757101  300705 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:41.757121  300705 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:41.757174  300705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:41.766817  300705 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:41.767837  300705 kubeconfig.go:125] found "embed-certs-135920" server: "https://192.168.72.207:8443"
	I0729 13:38:41.770191  300705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:41.779930  300705 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.207
	I0729 13:38:41.779961  300705 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:41.779976  300705 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:41.780030  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.816273  300705 cri.go:89] found id: ""
	I0729 13:38:41.816350  300705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:41.836512  300705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:41.847230  300705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:41.847249  300705 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:41.847297  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:41.856215  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:41.856262  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:41.866646  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:41.876656  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:41.876723  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:41.886810  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.895693  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:41.895755  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.904774  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:41.915232  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:41.915301  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:41.924961  300705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:41.937051  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:42.059359  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:39.329415  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.826891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.361613  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.861155  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.361524  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.862047  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.361778  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.862055  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.861737  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.361194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.862019  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.326814  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:45.666203  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:42.934386  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.142119  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.221754  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.346345  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:43.346451  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.847275  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.347551  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.391680  300705 api_server.go:72] duration metric: took 1.045336573s to wait for apiserver process to appear ...
	I0729 13:38:44.391709  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:44.391735  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:44.392354  300705 api_server.go:269] stopped: https://192.168.72.207:8443/healthz: Get "https://192.168.72.207:8443/healthz": dial tcp 192.168.72.207:8443: connect: connection refused
	I0729 13:38:44.892773  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.149059  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.149101  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.149128  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.161645  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.161672  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.391878  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.396499  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.396527  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:47.892015  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.897406  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.897436  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:48.391867  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:48.395941  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:38:48.401926  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:48.401951  300705 api_server.go:131] duration metric: took 4.010234721s to wait for apiserver health ...
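The poll above shows the apiserver moving from 403 (anonymous access to /healthz rejected) through 500 (the rbac and scheduling bootstrap post-start hooks still pending) to a final 200. As a minimal sketch of that kind of health probe — not minikube's actual api_server.go code; the interval, timeout, and TLS handling here are assumptions for illustration — a Go loop could look like this:

// Sketch only: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe; certificate verification skipped purely for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: the control plane is serving
			}
			// 403/500 while bootstrap hooks finish; keep retrying.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.207:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}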
	I0729 13:38:48.401962  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:48.401970  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:48.403912  300705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:44.073092  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:46.327011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:48.405332  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:48.416550  300705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
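The step above copies a 496-byte bridge CNI conflist into /etc/cni/net.d for CRI-O to pick up. Purely as an illustration of what such a conflist generally contains — a bridge plugin plus portmap; the field values (name, subnet, bridge name) below are generic assumptions, not minikube's actual template — a sketch that writes one could be:

// Sketch only: write a generic bridge+portmap CNI conflist.
package main

import (
	"log"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Config files dropped in /etc/cni/net.d are read by the container runtime (CRI-O here).
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}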
	I0729 13:38:48.439881  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:48.452435  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:48.452477  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:48.452527  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:48.452544  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:48.452556  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:48.452575  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:48.452584  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:48.452594  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:48.452604  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:48.452617  300705 system_pods.go:74] duration metric: took 12.710662ms to wait for pod list to return data ...
	I0729 13:38:48.452629  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:48.455453  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:48.455484  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:48.455497  300705 node_conditions.go:105] duration metric: took 2.858433ms to run NodePressure ...
	I0729 13:38:48.455518  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:48.791507  300705 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796191  300705 kubeadm.go:739] kubelet initialised
	I0729 13:38:48.796213  300705 kubeadm.go:740] duration metric: took 4.674843ms waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796222  300705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:48.802395  300705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.807224  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807247  300705 pod_ready.go:81] duration metric: took 4.825485ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.807263  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807269  300705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.812485  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812516  300705 pod_ready.go:81] duration metric: took 5.235923ms for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.812529  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812536  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.817345  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817374  300705 pod_ready.go:81] duration metric: took 4.827847ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.817383  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817390  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.843709  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843754  300705 pod_ready.go:81] duration metric: took 26.35618ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.843775  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843783  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.243226  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243257  300705 pod_ready.go:81] duration metric: took 399.464753ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.243269  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243278  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.643370  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643399  300705 pod_ready.go:81] duration metric: took 400.112533ms for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.643410  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643416  300705 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:50.044089  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044119  300705 pod_ready.go:81] duration metric: took 400.694081ms for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:50.044128  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044135  300705 pod_ready.go:38] duration metric: took 1.247904039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
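The pod_ready.go entries above wait up to 4m0s per system-critical pod and skip each one while the host node still reports Ready:"False". A minimal client-go sketch of that kind of readiness wait — the helper names, polling interval, and kubeconfig handling are assumptions, not minikube's pod_ready.go — might look like:

// Sketch only: poll a pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-rgh5d", 4*time.Minute))
}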
	I0729 13:38:50.044153  300705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:50.055730  300705 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:50.055755  300705 kubeadm.go:597] duration metric: took 8.298625813s to restartPrimaryControlPlane
	I0729 13:38:50.055765  300705 kubeadm.go:394] duration metric: took 8.349303256s to StartCluster
	I0729 13:38:50.055785  300705 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.055869  300705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:50.057734  300705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.058013  300705 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:50.058092  300705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:50.058165  300705 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-135920"
	I0729 13:38:50.058216  300705 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-135920"
	W0729 13:38:50.058230  300705 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:50.058217  300705 addons.go:69] Setting default-storageclass=true in profile "embed-certs-135920"
	I0729 13:38:50.058244  300705 addons.go:69] Setting metrics-server=true in profile "embed-certs-135920"
	I0729 13:38:50.058268  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058270  300705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-135920"
	I0729 13:38:50.058297  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:50.058305  300705 addons.go:234] Setting addon metrics-server=true in "embed-certs-135920"
	W0729 13:38:50.058350  300705 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:50.058416  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058719  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058746  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058763  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058766  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058732  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058835  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.061029  300705 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:50.062610  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:50.074642  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0729 13:38:50.074661  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0729 13:38:50.075119  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0729 13:38:50.075217  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075310  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075570  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075833  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.075856  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076049  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076066  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076273  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076367  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076393  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076434  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076620  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.076863  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.076912  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.076959  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.077488  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.077519  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.080392  300705 addons.go:234] Setting addon default-storageclass=true in "embed-certs-135920"
	W0729 13:38:50.080419  300705 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:50.080458  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.080872  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.080914  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.093352  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0729 13:38:50.093981  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.094704  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.094742  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.095201  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.095452  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.095863  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0729 13:38:50.096287  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096506  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0729 13:38:50.096945  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096974  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.096991  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.097343  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.097408  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.097508  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.097529  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.099585  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.099600  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.099936  300705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:50.100730  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.100765  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.101377  300705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.101399  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:50.101424  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.101563  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.103218  300705 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:46.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.862046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.362045  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.361183  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.862026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.361204  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.861490  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.361635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.861519  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.104927  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:50.104948  300705 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:50.104971  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.105309  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106036  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.106207  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106369  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.106615  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.106716  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.106817  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.108316  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.108859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108908  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.109081  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.109240  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.109354  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.119251  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0729 13:38:50.119703  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.120206  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.120235  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.120620  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.120813  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.122685  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.122898  300705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.122910  300705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:50.122923  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.125412  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.125875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.125914  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.126140  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.126321  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.126448  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.126566  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
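The sshutil entries above build SSH clients against the VM's DHCP-leased address with the machine's id_rsa key, and the following ssh_runner steps push the addon manifests and run kubectl apply over that connection. As an illustration only — the wrapper function, key path handling, and command below are assumptions, not minikube's ssh_runner implementation — running one remote command with golang.org/x/crypto/ssh looks roughly like:

// Sketch only: run a single command on a remote VM over SSH with key auth.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.72.207:22", "docker",
		os.Getenv("HOME")+"/.minikube/machines/embed-certs-135920/id_rsa", // hypothetical key path
		"sudo systemctl start kubelet")
	fmt.Println(out, err)
}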
	I0729 13:38:50.254664  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:50.276352  300705 node_ready.go:35] waiting up to 6m0s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:50.328315  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.412968  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.459653  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:50.459697  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:50.513203  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:50.513237  300705 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:50.576439  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.576469  300705 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:50.611994  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.701214  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701569  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.701636  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701647  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701657  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701663  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701909  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701936  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701939  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.707113  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.707130  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.707390  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.707407  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.707407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.625719  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212712139s)
	I0729 13:38:51.625766  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.625778  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626066  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.626109  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626117  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.626135  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.626143  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626412  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626430  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662030  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.049982518s)
	I0729 13:38:51.662094  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662110  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.662391  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.662759  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.662781  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662798  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.663076  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.663117  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.663126  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.663138  300705 addons.go:475] Verifying addon metrics-server=true in "embed-certs-135920"
	I0729 13:38:51.666005  300705 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 13:38:47.666568  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.167349  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.667365  300705 addons.go:510] duration metric: took 1.609276005s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 13:38:52.280219  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.826113  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.826826  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:53.327720  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.861510  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.362026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.861182  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.361850  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.861931  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.362035  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.861192  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.361173  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.862018  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.665875  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.666184  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.779805  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:56.780550  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:55.826349  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:58.326186  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:56.361740  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.862033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.362084  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.861406  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.861194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.361788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.861962  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.362043  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.862000  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.166551  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:59.167246  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.666773  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:57.780677  300705 node_ready.go:49] node "embed-certs-135920" has status "Ready":"True"
	I0729 13:38:57.780700  300705 node_ready.go:38] duration metric: took 7.504317897s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:57.780709  300705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:57.786299  300705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791107  300705 pod_ready.go:92] pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:57.791132  300705 pod_ready.go:81] duration metric: took 4.805712ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791143  300705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:59.806437  300705 pod_ready.go:102] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:00.296725  300705 pod_ready.go:92] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.296772  300705 pod_ready.go:81] duration metric: took 2.505622037s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.296782  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302450  300705 pod_ready.go:92] pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.302471  300705 pod_ready.go:81] duration metric: took 5.680644ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302482  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306734  300705 pod_ready.go:92] pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.306753  300705 pod_ready.go:81] duration metric: took 4.264085ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306762  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311745  300705 pod_ready.go:92] pod "kube-proxy-sn8bc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.311763  300705 pod_ready.go:81] duration metric: took 4.990061ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311773  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817465  300705 pod_ready.go:92] pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:01.817489  300705 pod_ready.go:81] duration metric: took 1.50570948s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817499  300705 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.825911  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.325485  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.362213  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.861107  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.361767  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.861151  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.361607  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.862013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.362032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.861858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.361611  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.862037  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.667047  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.166825  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.826817  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.326374  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:05.325891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:07.326167  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.362002  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.861635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.361659  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.862061  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.862083  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.361356  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.861763  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.361420  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.861822  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.666165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:10.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:08.824692  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.324207  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:09.326609  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.826082  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.362046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.861909  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.861834  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.361461  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.861666  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.861830  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.361141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.862003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.167800  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.665790  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:13.325286  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.826111  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:14.327217  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.826625  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.361731  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.862014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.361702  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.862141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.361808  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.361104  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.861123  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.361276  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.861176  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.666780  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.165629  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:18.328096  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.824426  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:19.326628  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.825705  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.362052  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.861150  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.361802  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.861996  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.362106  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.861135  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.361998  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.862048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.361848  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.861813  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.666434  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.666549  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:22.824988  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.825210  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.825579  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:23.826380  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:25.826544  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:27.826988  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:26.861651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:26.861733  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:26.904275  301425 cri.go:89] found id: ""
	I0729 13:39:26.904307  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.904315  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:26.904322  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:26.904387  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:26.946925  301425 cri.go:89] found id: ""
	I0729 13:39:26.946954  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.946966  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:26.946973  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:26.947036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:26.979236  301425 cri.go:89] found id: ""
	I0729 13:39:26.979267  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.979276  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:26.979282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:26.979330  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:27.022185  301425 cri.go:89] found id: ""
	I0729 13:39:27.022212  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.022220  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:27.022226  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:27.022277  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:27.055228  301425 cri.go:89] found id: ""
	I0729 13:39:27.055256  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.055266  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:27.055274  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:27.055335  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:27.088885  301425 cri.go:89] found id: ""
	I0729 13:39:27.088918  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.088926  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:27.088933  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:27.088986  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:27.123861  301425 cri.go:89] found id: ""
	I0729 13:39:27.123893  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.123902  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:27.123915  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:27.123967  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:27.157921  301425 cri.go:89] found id: ""
	I0729 13:39:27.157956  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.157964  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
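Every cri.go listing above comes back empty because no control-plane containers ever started on this profile, and the run then falls back to collecting journalctl, dmesg, and describe-nodes output below. A minimal sketch of that kind of crictl listing — the wrapper function is an assumption, not minikube's cri.go, but the command matches what is logged — could be:

// Sketch only: list CRI container IDs by name by shelling out to crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	// Matches the logged command: sudo crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// One container ID per line; an empty slice means nothing matched.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		fmt.Printf("%s: %d containers (err=%v)\n", name, len(ids), err)
	}
}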
	I0729 13:39:27.157988  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:27.158003  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.222447  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:27.222489  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:27.265646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:27.265680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:27.317344  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:27.317388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:27.333664  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:27.333689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:27.460502  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:29.960703  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:29.974159  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:29.974235  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:30.009701  301425 cri.go:89] found id: ""
	I0729 13:39:30.009740  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.009753  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:30.009761  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:30.009822  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:30.045806  301425 cri.go:89] found id: ""
	I0729 13:39:30.045841  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.045853  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:30.045860  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:30.045924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:30.078709  301425 cri.go:89] found id: ""
	I0729 13:39:30.078738  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.078747  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:30.078753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:30.078808  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:30.112884  301425 cri.go:89] found id: ""
	I0729 13:39:30.112920  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.112932  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:30.112943  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:30.113012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:30.148160  301425 cri.go:89] found id: ""
	I0729 13:39:30.148196  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.148208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:30.148217  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:30.148285  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:30.186939  301425 cri.go:89] found id: ""
	I0729 13:39:30.186967  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.186975  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:30.186981  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:30.187039  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:30.241888  301425 cri.go:89] found id: ""
	I0729 13:39:30.241915  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.241926  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:30.241934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:30.242009  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:30.281482  301425 cri.go:89] found id: ""
	I0729 13:39:30.281510  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.281518  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:30.281527  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:30.281540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:30.321688  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:30.321730  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:30.378464  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:30.378508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:30.394109  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:30.394150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:30.474077  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:30.474101  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:30.474118  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.166322  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.166623  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.666142  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.323534  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.324750  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:30.327219  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:32.826011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.046016  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:33.059705  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:33.059795  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:33.096521  301425 cri.go:89] found id: ""
	I0729 13:39:33.096549  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.096557  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:33.096564  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:33.096621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:33.131262  301425 cri.go:89] found id: ""
	I0729 13:39:33.131295  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.131307  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:33.131314  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:33.131378  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:33.168889  301425 cri.go:89] found id: ""
	I0729 13:39:33.168915  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.168925  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:33.168932  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:33.168994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:33.205513  301425 cri.go:89] found id: ""
	I0729 13:39:33.205547  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.205558  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:33.205567  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:33.205644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:33.247051  301425 cri.go:89] found id: ""
	I0729 13:39:33.247079  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.247087  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:33.247093  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:33.247149  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:33.279541  301425 cri.go:89] found id: ""
	I0729 13:39:33.279575  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.279587  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:33.279596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:33.279659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:33.314000  301425 cri.go:89] found id: ""
	I0729 13:39:33.314034  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.314046  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:33.314054  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:33.314117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:33.351363  301425 cri.go:89] found id: ""
	I0729 13:39:33.351390  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.351401  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:33.351412  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:33.351437  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:33.413509  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:33.413547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:33.428128  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:33.428165  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:33.495430  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:33.495461  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:33.495478  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:33.574060  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:33.574098  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:34.166133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.167919  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.823668  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.824684  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.326216  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826516  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.113561  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:36.126899  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:36.126965  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:36.163363  301425 cri.go:89] found id: ""
	I0729 13:39:36.163396  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.163407  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:36.163414  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:36.163473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:36.205215  301425 cri.go:89] found id: ""
	I0729 13:39:36.205243  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.205259  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:36.205267  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:36.205331  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:36.243166  301425 cri.go:89] found id: ""
	I0729 13:39:36.243220  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.243231  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:36.243239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:36.243295  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:36.280804  301425 cri.go:89] found id: ""
	I0729 13:39:36.280836  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.280845  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:36.280852  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:36.280903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:36.317291  301425 cri.go:89] found id: ""
	I0729 13:39:36.317320  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.317330  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:36.317337  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:36.317399  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:36.358111  301425 cri.go:89] found id: ""
	I0729 13:39:36.358145  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.358156  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:36.358164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:36.358229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:36.399407  301425 cri.go:89] found id: ""
	I0729 13:39:36.399440  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.399451  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:36.399459  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:36.399525  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:36.437876  301425 cri.go:89] found id: ""
	I0729 13:39:36.437904  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.437914  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:36.437926  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:36.437942  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:36.514464  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:36.514493  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:36.514511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:36.592036  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:36.592083  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.647650  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:36.647691  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:36.706890  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:36.706935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.226070  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:39.239313  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:39.239373  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:39.274158  301425 cri.go:89] found id: ""
	I0729 13:39:39.274191  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.274202  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:39.274210  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:39.274286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:39.308448  301425 cri.go:89] found id: ""
	I0729 13:39:39.308484  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.308492  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:39.308499  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:39.308563  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:39.347745  301425 cri.go:89] found id: ""
	I0729 13:39:39.347782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.347791  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:39.347798  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:39.347856  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:39.380649  301425 cri.go:89] found id: ""
	I0729 13:39:39.380679  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.380688  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:39.380696  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:39.380767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:39.415076  301425 cri.go:89] found id: ""
	I0729 13:39:39.415107  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.415115  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:39.415120  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:39.415170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:39.450749  301425 cri.go:89] found id: ""
	I0729 13:39:39.450782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.450793  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:39.450801  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:39.450864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:39.482148  301425 cri.go:89] found id: ""
	I0729 13:39:39.482175  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.482184  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:39.482190  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:39.482239  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:39.518558  301425 cri.go:89] found id: ""
	I0729 13:39:39.518588  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.518597  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:39.518608  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:39.518622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:39.555753  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:39.555786  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:39.606627  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:39.606661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.620359  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:39.620388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:39.690685  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:39.690711  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:39.690728  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:38.665446  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.666445  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826801  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.325166  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:39.827390  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.326038  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.271925  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:42.284365  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:42.284447  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:42.318966  301425 cri.go:89] found id: ""
	I0729 13:39:42.318998  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.319020  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:42.319028  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:42.319111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:42.354811  301425 cri.go:89] found id: ""
	I0729 13:39:42.354840  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.354854  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:42.354862  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:42.354917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:42.402524  301425 cri.go:89] found id: ""
	I0729 13:39:42.402557  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.402569  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:42.402577  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:42.402643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:42.460954  301425 cri.go:89] found id: ""
	I0729 13:39:42.460984  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.461001  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:42.461010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:42.461063  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:42.516849  301425 cri.go:89] found id: ""
	I0729 13:39:42.516880  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.516890  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:42.516898  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:42.516963  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:42.560289  301425 cri.go:89] found id: ""
	I0729 13:39:42.560316  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.560325  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:42.560332  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:42.560397  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:42.597798  301425 cri.go:89] found id: ""
	I0729 13:39:42.597829  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.597839  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:42.597847  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:42.597912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:42.633015  301425 cri.go:89] found id: ""
	I0729 13:39:42.633043  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.633059  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:42.633068  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:42.633080  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:42.711103  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:42.711126  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:42.711141  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.787459  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:42.787499  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:42.828965  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:42.829002  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:42.881702  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:42.881740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:45.396462  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:45.410766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:45.410859  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:45.445886  301425 cri.go:89] found id: ""
	I0729 13:39:45.445931  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.445943  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:45.445960  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:45.446023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:45.484293  301425 cri.go:89] found id: ""
	I0729 13:39:45.484326  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.484338  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:45.484346  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:45.484410  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:45.520209  301425 cri.go:89] found id: ""
	I0729 13:39:45.520237  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.520246  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:45.520252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:45.520300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:45.555671  301425 cri.go:89] found id: ""
	I0729 13:39:45.555702  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.555711  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:45.555717  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:45.555767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:45.594578  301425 cri.go:89] found id: ""
	I0729 13:39:45.594609  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.594618  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:45.594624  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:45.594685  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:45.631777  301425 cri.go:89] found id: ""
	I0729 13:39:45.631805  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.631817  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:45.631825  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:45.631881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:45.667163  301425 cri.go:89] found id: ""
	I0729 13:39:45.667189  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.667197  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:45.667203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:45.667258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:45.703393  301425 cri.go:89] found id: ""
	I0729 13:39:45.703434  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.703443  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:45.703454  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:45.703488  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:45.774424  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:45.774452  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:45.774472  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:45.857529  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:45.857586  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:45.899737  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:45.899775  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:45.952640  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:45.952685  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:42.666728  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.165982  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.825543  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.323544  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:47.323595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:44.825237  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:46.825276  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.467705  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:48.482292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:48.482380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:48.520146  301425 cri.go:89] found id: ""
	I0729 13:39:48.520181  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.520195  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:48.520204  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:48.520282  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:48.552623  301425 cri.go:89] found id: ""
	I0729 13:39:48.552654  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.552665  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:48.552672  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:48.552734  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:48.587254  301425 cri.go:89] found id: ""
	I0729 13:39:48.587290  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.587303  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:48.587309  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:48.587368  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:48.621045  301425 cri.go:89] found id: ""
	I0729 13:39:48.621076  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.621088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:48.621096  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:48.621160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:48.654117  301425 cri.go:89] found id: ""
	I0729 13:39:48.654151  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.654163  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:48.654171  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:48.654236  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:48.693108  301425 cri.go:89] found id: ""
	I0729 13:39:48.693149  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.693166  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:48.693173  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:48.693225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:48.733000  301425 cri.go:89] found id: ""
	I0729 13:39:48.733025  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.733033  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:48.733039  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:48.733088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:48.773761  301425 cri.go:89] found id: ""
	I0729 13:39:48.773789  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.773798  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:48.773807  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:48.773822  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:48.826655  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:48.826683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.840335  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:48.840364  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:48.913727  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:48.913754  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:48.913774  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:48.990196  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:48.990235  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:47.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.167105  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.667165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.324027  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.324146  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.825859  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.326299  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.533333  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:51.547115  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:51.547175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:51.583247  301425 cri.go:89] found id: ""
	I0729 13:39:51.583284  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.583292  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:51.583297  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:51.583350  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:51.618925  301425 cri.go:89] found id: ""
	I0729 13:39:51.618958  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.618969  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:51.618977  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:51.619036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:51.657099  301425 cri.go:89] found id: ""
	I0729 13:39:51.657132  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.657144  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:51.657151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:51.657210  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:51.695413  301425 cri.go:89] found id: ""
	I0729 13:39:51.695459  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.695471  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:51.695480  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:51.695553  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:51.731153  301425 cri.go:89] found id: ""
	I0729 13:39:51.731186  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.731198  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:51.731206  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:51.731271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:51.765662  301425 cri.go:89] found id: ""
	I0729 13:39:51.765716  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.765730  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:51.765740  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:51.765807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:51.800442  301425 cri.go:89] found id: ""
	I0729 13:39:51.800480  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.800491  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:51.800500  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:51.800562  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:51.844516  301425 cri.go:89] found id: ""
	I0729 13:39:51.844542  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.844551  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:51.844562  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:51.844580  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:51.896139  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:51.896176  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:51.910479  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:51.910511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:51.980025  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:51.980052  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:51.980071  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:52.054674  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:52.054717  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.596468  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:54.612233  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:54.612344  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:54.653506  301425 cri.go:89] found id: ""
	I0729 13:39:54.653547  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.653558  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:54.653565  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:54.653624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:54.696964  301425 cri.go:89] found id: ""
	I0729 13:39:54.697002  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.697015  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:54.697023  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:54.697088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:54.731165  301425 cri.go:89] found id: ""
	I0729 13:39:54.731196  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.731207  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:54.731214  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:54.731279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:54.774397  301425 cri.go:89] found id: ""
	I0729 13:39:54.774426  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.774437  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:54.774444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:54.774506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:54.813365  301425 cri.go:89] found id: ""
	I0729 13:39:54.813396  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.813408  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:54.813414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:54.813480  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:54.849936  301425 cri.go:89] found id: ""
	I0729 13:39:54.849962  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.849970  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:54.849980  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:54.850042  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:54.883979  301425 cri.go:89] found id: ""
	I0729 13:39:54.884007  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.884015  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:54.884021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:54.884087  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:54.919754  301425 cri.go:89] found id: ""
	I0729 13:39:54.919779  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.919787  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:54.919796  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:54.919817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:54.973082  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:54.973117  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:54.986534  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:54.986571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:55.055473  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:55.055499  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:55.055514  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:55.138278  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:55.138322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.166585  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:56.166714  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.824525  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.824559  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.825238  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.826464  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.826664  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.683818  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:57.698992  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:57.699070  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:57.742071  301425 cri.go:89] found id: ""
	I0729 13:39:57.742103  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.742113  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:57.742121  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:57.742185  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:57.777871  301425 cri.go:89] found id: ""
	I0729 13:39:57.777902  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.777911  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:57.777918  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:57.777975  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:57.817767  301425 cri.go:89] found id: ""
	I0729 13:39:57.817798  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.817809  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:57.817817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:57.817889  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:57.855608  301425 cri.go:89] found id: ""
	I0729 13:39:57.855634  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.855644  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:57.855651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:57.855714  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:57.891219  301425 cri.go:89] found id: ""
	I0729 13:39:57.891248  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.891258  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:57.891266  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:57.891336  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:57.926000  301425 cri.go:89] found id: ""
	I0729 13:39:57.926034  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.926045  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:57.926053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:57.926116  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:57.964935  301425 cri.go:89] found id: ""
	I0729 13:39:57.964962  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.964978  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:57.964985  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:57.965051  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:58.001363  301425 cri.go:89] found id: ""
	I0729 13:39:58.001393  301425 logs.go:276] 0 containers: []
	W0729 13:39:58.001405  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:58.001417  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:58.001434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:58.057551  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:58.057598  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:58.072162  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:58.072200  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:58.140533  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:58.140565  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:58.140582  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:58.227285  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:58.227330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:00.769075  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:00.783394  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:00.783471  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:00.831260  301425 cri.go:89] found id: ""
	I0729 13:40:00.831291  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.831301  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:00.831309  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:00.831370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:00.870017  301425 cri.go:89] found id: ""
	I0729 13:40:00.870045  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.870057  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:00.870065  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:00.870127  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:00.904691  301425 cri.go:89] found id: ""
	I0729 13:40:00.904728  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.904740  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:00.904748  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:00.904828  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:00.937221  301425 cri.go:89] found id: ""
	I0729 13:40:00.937249  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.937259  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:00.937265  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:00.937329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:58.167355  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.666837  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.824755  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.324616  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.325368  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.325689  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.326062  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.977961  301425 cri.go:89] found id: ""
	I0729 13:40:00.977991  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.978002  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:00.978010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:00.978104  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:01.014239  301425 cri.go:89] found id: ""
	I0729 13:40:01.014271  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.014283  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:01.014292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:01.014362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:01.050583  301425 cri.go:89] found id: ""
	I0729 13:40:01.050615  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.050630  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:01.050637  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:01.050696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:01.091599  301425 cri.go:89] found id: ""
	I0729 13:40:01.091624  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.091634  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:01.091643  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:01.091661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:01.146404  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:01.146445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:01.160327  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:01.160358  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:01.237120  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:01.237147  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:01.237162  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:01.321539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:01.321590  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:03.865268  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:03.879648  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:03.879724  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:03.915303  301425 cri.go:89] found id: ""
	I0729 13:40:03.915329  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.915338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:03.915344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:03.915403  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:03.951982  301425 cri.go:89] found id: ""
	I0729 13:40:03.952014  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.952023  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:03.952032  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:03.952099  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:03.989751  301425 cri.go:89] found id: ""
	I0729 13:40:03.989785  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.989796  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:03.989804  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:03.989870  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:04.026934  301425 cri.go:89] found id: ""
	I0729 13:40:04.026975  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.026988  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:04.026996  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:04.027059  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:04.064135  301425 cri.go:89] found id: ""
	I0729 13:40:04.064165  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.064175  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:04.064187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:04.064256  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:04.103080  301425 cri.go:89] found id: ""
	I0729 13:40:04.103108  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.103117  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:04.103123  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:04.103172  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:04.143370  301425 cri.go:89] found id: ""
	I0729 13:40:04.143403  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.143414  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:04.143422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:04.143491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:04.179251  301425 cri.go:89] found id: ""
	I0729 13:40:04.179286  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.179298  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:04.179311  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:04.179330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:04.261058  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:04.261089  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:04.261111  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:04.342897  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:04.342935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:04.391504  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:04.391532  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:04.443064  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:04.443106  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:03.166195  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:05.166660  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.824882  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:07.324346  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.326236  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.825685  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.959346  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:06.974377  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:06.974444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:07.007797  301425 cri.go:89] found id: ""
	I0729 13:40:07.007834  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.007847  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:07.007856  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:07.007924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:07.042707  301425 cri.go:89] found id: ""
	I0729 13:40:07.042741  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.042749  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:07.042755  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:07.042807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:07.080150  301425 cri.go:89] found id: ""
	I0729 13:40:07.080185  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.080196  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:07.080203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:07.080268  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:07.115740  301425 cri.go:89] found id: ""
	I0729 13:40:07.115777  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.115788  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:07.115796  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:07.115888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:07.154110  301425 cri.go:89] found id: ""
	I0729 13:40:07.154141  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.154151  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:07.154158  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:07.154225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:07.190819  301425 cri.go:89] found id: ""
	I0729 13:40:07.190850  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.190858  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:07.190865  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:07.190917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:07.231530  301425 cri.go:89] found id: ""
	I0729 13:40:07.231560  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.231571  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:07.231579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:07.231643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:07.272211  301425 cri.go:89] found id: ""
	I0729 13:40:07.272240  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.272247  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:07.272257  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:07.272269  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.326673  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:07.326704  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:07.341255  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:07.341282  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:07.409850  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:07.409878  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:07.409895  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:07.493105  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:07.493169  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.033906  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:10.047938  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:10.048018  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:10.084224  301425 cri.go:89] found id: ""
	I0729 13:40:10.084251  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.084259  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:10.084265  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:10.084316  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:10.120362  301425 cri.go:89] found id: ""
	I0729 13:40:10.120398  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.120409  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:10.120417  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:10.120484  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:10.154128  301425 cri.go:89] found id: ""
	I0729 13:40:10.154160  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.154170  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:10.154178  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:10.154243  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:10.189539  301425 cri.go:89] found id: ""
	I0729 13:40:10.189574  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.189588  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:10.189596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:10.189661  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:10.228821  301425 cri.go:89] found id: ""
	I0729 13:40:10.228855  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.228867  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:10.228875  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:10.228950  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:10.274726  301425 cri.go:89] found id: ""
	I0729 13:40:10.274758  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.274769  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:10.274776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:10.274845  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:10.308910  301425 cri.go:89] found id: ""
	I0729 13:40:10.308945  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.308956  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:10.308964  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:10.309030  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:10.346008  301425 cri.go:89] found id: ""
	I0729 13:40:10.346044  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.346056  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:10.346069  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:10.346091  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:10.360541  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:10.360581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:10.433763  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:10.433788  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:10.433802  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:10.520366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:10.520418  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.561482  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:10.561512  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.668816  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:10.166833  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:09.823429  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.824033  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:08.826798  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.326762  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.327128  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.114858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:13.128348  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:13.128425  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:13.165329  301425 cri.go:89] found id: ""
	I0729 13:40:13.165359  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.165370  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:13.165377  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:13.165441  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:13.200104  301425 cri.go:89] found id: ""
	I0729 13:40:13.200135  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.200148  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:13.200155  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:13.200224  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:13.238632  301425 cri.go:89] found id: ""
	I0729 13:40:13.238680  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.238688  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:13.238694  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:13.238748  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:13.270859  301425 cri.go:89] found id: ""
	I0729 13:40:13.270892  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.270901  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:13.270907  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:13.270976  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:13.308346  301425 cri.go:89] found id: ""
	I0729 13:40:13.308378  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.308386  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:13.308392  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:13.308444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:13.346286  301425 cri.go:89] found id: ""
	I0729 13:40:13.346319  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.346331  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:13.346339  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:13.346412  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:13.383699  301425 cri.go:89] found id: ""
	I0729 13:40:13.383736  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.383769  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:13.383791  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:13.383850  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:13.419958  301425 cri.go:89] found id: ""
	I0729 13:40:13.420045  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.420058  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:13.420071  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:13.420094  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.473984  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:13.474028  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:13.488376  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:13.488410  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:13.559515  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:13.559543  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:13.559560  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:13.640528  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:13.640570  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:12.665799  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.666662  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.668217  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.323746  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.323961  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:15.826422  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.326284  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.189581  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:16.203962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:16.204052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:16.240537  301425 cri.go:89] found id: ""
	I0729 13:40:16.240572  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.240583  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:16.240591  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:16.240659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:16.277060  301425 cri.go:89] found id: ""
	I0729 13:40:16.277099  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.277112  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:16.277123  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:16.277200  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:16.313839  301425 cri.go:89] found id: ""
	I0729 13:40:16.313869  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.313878  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:16.313884  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:16.313935  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:16.351806  301425 cri.go:89] found id: ""
	I0729 13:40:16.351840  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.351850  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:16.351858  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:16.351922  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:16.387122  301425 cri.go:89] found id: ""
	I0729 13:40:16.387158  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.387169  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:16.387176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:16.387242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:16.424180  301425 cri.go:89] found id: ""
	I0729 13:40:16.424209  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.424220  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:16.424229  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:16.424292  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:16.461827  301425 cri.go:89] found id: ""
	I0729 13:40:16.461865  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.461879  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:16.461889  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:16.461946  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:16.510198  301425 cri.go:89] found id: ""
	I0729 13:40:16.510230  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.510238  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:16.510248  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:16.510264  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:16.585378  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:16.585420  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.629304  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:16.629337  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:16.682386  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:16.682434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:16.698405  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:16.698436  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:16.770281  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.270551  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:19.284543  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:19.284617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:19.325194  301425 cri.go:89] found id: ""
	I0729 13:40:19.325221  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.325231  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:19.325238  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:19.325298  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:19.362007  301425 cri.go:89] found id: ""
	I0729 13:40:19.362038  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.362058  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:19.362066  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:19.362196  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:19.401162  301425 cri.go:89] found id: ""
	I0729 13:40:19.401191  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.401202  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:19.401210  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:19.401274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:19.434652  301425 cri.go:89] found id: ""
	I0729 13:40:19.434689  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.434700  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:19.434709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:19.434774  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:19.470116  301425 cri.go:89] found id: ""
	I0729 13:40:19.470149  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.470157  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:19.470164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:19.470218  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:19.503593  301425 cri.go:89] found id: ""
	I0729 13:40:19.503621  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.503629  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:19.503635  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:19.503696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:19.546127  301425 cri.go:89] found id: ""
	I0729 13:40:19.546155  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.546164  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:19.546169  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:19.546217  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:19.584600  301425 cri.go:89] found id: ""
	I0729 13:40:19.584639  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.584650  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:19.584663  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:19.584681  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:19.599411  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:19.599446  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:19.665811  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.665836  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:19.665853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:19.747295  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:19.747339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:19.790476  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:19.790516  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:18.669004  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.166437  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.824788  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.327093  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:20.825470  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.827651  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.346725  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:22.361349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:22.361443  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:22.394840  301425 cri.go:89] found id: ""
	I0729 13:40:22.394870  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.394881  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:22.394889  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:22.394956  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:22.429328  301425 cri.go:89] found id: ""
	I0729 13:40:22.429356  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.429364  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:22.429370  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:22.429431  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:22.463179  301425 cri.go:89] found id: ""
	I0729 13:40:22.463206  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.463214  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:22.463220  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:22.463291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:22.497527  301425 cri.go:89] found id: ""
	I0729 13:40:22.497557  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.497565  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:22.497571  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:22.497627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:22.537607  301425 cri.go:89] found id: ""
	I0729 13:40:22.537635  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.537646  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:22.537654  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:22.537718  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:22.580658  301425 cri.go:89] found id: ""
	I0729 13:40:22.580689  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.580701  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:22.580709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:22.580775  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:22.622229  301425 cri.go:89] found id: ""
	I0729 13:40:22.622261  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.622270  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:22.622282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:22.622346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:22.660091  301425 cri.go:89] found id: ""
	I0729 13:40:22.660120  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.660129  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:22.660139  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:22.660153  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.715053  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:22.715090  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:22.728865  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:22.728898  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:22.805760  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:22.805785  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:22.805799  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:22.890915  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:22.890960  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:25.457272  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:25.471002  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:25.471088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:25.506190  301425 cri.go:89] found id: ""
	I0729 13:40:25.506226  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.506237  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:25.506244  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:25.506297  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:25.540957  301425 cri.go:89] found id: ""
	I0729 13:40:25.540991  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.541002  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:25.541011  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:25.541074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:25.578378  301425 cri.go:89] found id: ""
	I0729 13:40:25.578424  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.578440  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:25.578448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:25.578518  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:25.620930  301425 cri.go:89] found id: ""
	I0729 13:40:25.620962  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.620979  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:25.620987  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:25.621056  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:25.655558  301425 cri.go:89] found id: ""
	I0729 13:40:25.655589  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.655597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:25.655604  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:25.655670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:25.688810  301425 cri.go:89] found id: ""
	I0729 13:40:25.688845  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.688855  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:25.688863  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:25.688930  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:25.724384  301425 cri.go:89] found id: ""
	I0729 13:40:25.724416  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.724428  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:25.724435  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:25.724514  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:25.763174  301425 cri.go:89] found id: ""
	I0729 13:40:25.763200  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.763209  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:25.763219  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:25.763232  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:25.818517  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:25.818569  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:25.833939  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:25.833973  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:25.910487  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:25.910515  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:25.910537  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:23.167028  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.666513  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:23.824183  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.827054  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.325894  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:27.824855  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.993887  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:25.993929  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:28.536843  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:28.550097  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:28.550175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:28.592664  301425 cri.go:89] found id: ""
	I0729 13:40:28.592697  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.592709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:28.592716  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:28.592788  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:28.638299  301425 cri.go:89] found id: ""
	I0729 13:40:28.638329  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.638337  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:28.638343  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:28.638395  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:28.682410  301425 cri.go:89] found id: ""
	I0729 13:40:28.682437  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.682446  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:28.682452  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:28.682511  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:28.719402  301425 cri.go:89] found id: ""
	I0729 13:40:28.719430  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.719438  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:28.719444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:28.719504  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:28.767515  301425 cri.go:89] found id: ""
	I0729 13:40:28.767547  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.767559  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:28.767568  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:28.767633  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:28.811600  301425 cri.go:89] found id: ""
	I0729 13:40:28.811632  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.811644  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:28.811652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:28.811727  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:28.853364  301425 cri.go:89] found id: ""
	I0729 13:40:28.853397  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.853407  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:28.853414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:28.853486  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:28.890981  301425 cri.go:89] found id: ""
	I0729 13:40:28.891013  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.891024  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:28.891035  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:28.891050  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:28.944174  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:28.944213  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:28.957724  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:28.957755  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:29.026457  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:29.026479  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:29.026497  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:29.105366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:29.105415  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:27.667251  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.166789  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:28.323476  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.324242  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:32.325477  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:29.825621  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.828363  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
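
The interleaved pod_ready lines come from three other clusters in the same StartStop group (processes 300705, 300746 and 301044), each polling a metrics-server pod that never reports Ready. A minimal way to check the same condition by hand, using one of the pod names from this log (the cluster context names are not shown in this excerpt, so the context below is a placeholder):

	kubectl --context <cluster-context> -n kube-system get pod metrics-server-78fcd8795b-dv8pr \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

A value of False matches the "Ready":"False" status the log keeps reporting.
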
	I0729 13:40:31.649374  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:31.663432  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:31.663512  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:31.702047  301425 cri.go:89] found id: ""
	I0729 13:40:31.702080  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.702088  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:31.702098  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:31.702162  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:31.738484  301425 cri.go:89] found id: ""
	I0729 13:40:31.738510  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.738518  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:31.738524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:31.738583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:31.774214  301425 cri.go:89] found id: ""
	I0729 13:40:31.774249  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.774261  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:31.774270  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:31.774339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:31.810263  301425 cri.go:89] found id: ""
	I0729 13:40:31.810293  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.810302  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:31.810307  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:31.810369  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:31.848124  301425 cri.go:89] found id: ""
	I0729 13:40:31.848153  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.848160  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:31.848167  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:31.848234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:31.885531  301425 cri.go:89] found id: ""
	I0729 13:40:31.885561  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.885571  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:31.885580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:31.885650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:31.923904  301425 cri.go:89] found id: ""
	I0729 13:40:31.923939  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.923952  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:31.923959  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:31.924029  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:31.957165  301425 cri.go:89] found id: ""
	I0729 13:40:31.957202  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.957213  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:31.957228  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:31.957248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:32.039221  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:32.039262  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.078191  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:32.078229  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:32.131871  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:32.131922  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:32.146676  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:32.146706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:32.223849  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
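
Process 301425 repeats the block above roughly every three seconds: it looks for a kube-apiserver process, lists CRI containers for each control-plane component (all empty), then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs, with describe nodes failing because nothing answers on localhost:8443. A rough sketch of the same diagnostic pass run by hand on the node, using only the commands visible in the log (assuming root SSH access to the minikube VM):

	#!/bin/bash
	# Same diagnostics minikube's log collector runs; commands copied from the log above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"        # empty output means no container was found
	done
	sudo journalctl -u kubelet -n 400              # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	     --kubeconfig=/var/lib/minikube/kubeconfig # refused while the apiserver is down
	sudo journalctl -u crio -n 400                 # CRI-O logs
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
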
	I0729 13:40:34.724927  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:34.739029  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:34.739113  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:34.774627  301425 cri.go:89] found id: ""
	I0729 13:40:34.774660  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.774669  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:34.774675  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:34.774743  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:34.809840  301425 cri.go:89] found id: ""
	I0729 13:40:34.809872  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.809882  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:34.809887  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:34.809940  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:34.847530  301425 cri.go:89] found id: ""
	I0729 13:40:34.847561  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.847572  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:34.847580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:34.847648  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:34.881828  301425 cri.go:89] found id: ""
	I0729 13:40:34.881856  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.881870  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:34.881876  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:34.881937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:34.918903  301425 cri.go:89] found id: ""
	I0729 13:40:34.918937  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.918949  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:34.918956  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:34.919015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:34.954714  301425 cri.go:89] found id: ""
	I0729 13:40:34.954749  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.954761  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:34.954770  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:34.954825  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:34.993433  301425 cri.go:89] found id: ""
	I0729 13:40:34.993463  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.993472  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:34.993478  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:34.993531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:35.033830  301425 cri.go:89] found id: ""
	I0729 13:40:35.033859  301425 logs.go:276] 0 containers: []
	W0729 13:40:35.033874  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:35.033884  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:35.033900  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:35.084546  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:35.084595  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:35.098807  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:35.098845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:35.182636  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:35.182662  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:35.182674  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:35.262767  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:35.262808  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.665817  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.670805  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.823905  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.824232  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.326644  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.825977  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:37.802033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:37.815633  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:37.815697  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:37.857522  301425 cri.go:89] found id: ""
	I0729 13:40:37.857552  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.857563  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:37.857571  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:37.857627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:37.897527  301425 cri.go:89] found id: ""
	I0729 13:40:37.897564  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.897575  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:37.897583  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:37.897649  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.937135  301425 cri.go:89] found id: ""
	I0729 13:40:37.937167  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.937176  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:37.937189  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:37.937255  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:37.972699  301425 cri.go:89] found id: ""
	I0729 13:40:37.972734  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.972751  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:37.972761  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:37.972933  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:38.012702  301425 cri.go:89] found id: ""
	I0729 13:40:38.012732  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.012740  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:38.012747  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:38.012832  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:38.050228  301425 cri.go:89] found id: ""
	I0729 13:40:38.050260  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.050268  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:38.050275  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:38.050329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:38.084665  301425 cri.go:89] found id: ""
	I0729 13:40:38.084693  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.084707  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:38.084715  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:38.084780  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:38.119155  301425 cri.go:89] found id: ""
	I0729 13:40:38.119200  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.119211  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:38.119222  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:38.119236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:38.170934  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:38.170968  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:38.185298  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:38.185329  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:38.256118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:38.256149  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:38.256166  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:38.337090  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:38.337127  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:40.876177  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:40.889580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:40.889655  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:40.922971  301425 cri.go:89] found id: ""
	I0729 13:40:40.923002  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.923010  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:40.923016  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:40.923074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:40.955840  301425 cri.go:89] found id: ""
	I0729 13:40:40.955872  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.955884  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:40.955891  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:40.955952  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.165718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.166160  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.168344  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:38.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.324607  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.324996  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.344232  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:40.993258  301425 cri.go:89] found id: ""
	I0729 13:40:40.993290  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.993298  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:40.993305  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:40.993357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:41.026370  301425 cri.go:89] found id: ""
	I0729 13:40:41.026398  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.026409  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:41.026416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:41.026473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:41.060538  301425 cri.go:89] found id: ""
	I0729 13:40:41.060565  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.060574  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:41.060579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:41.060630  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:41.105074  301425 cri.go:89] found id: ""
	I0729 13:40:41.105108  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.105118  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:41.105126  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:41.105193  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:41.138254  301425 cri.go:89] found id: ""
	I0729 13:40:41.138280  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.138288  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:41.138294  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:41.138342  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:41.171432  301425 cri.go:89] found id: ""
	I0729 13:40:41.171458  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.171466  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:41.171475  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:41.171487  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:41.184703  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:41.184736  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:41.265356  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:41.265392  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:41.265409  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:41.345939  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:41.345979  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:41.388819  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:41.388852  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:43.940388  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:43.955448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:43.955515  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:43.998457  301425 cri.go:89] found id: ""
	I0729 13:40:43.998494  301425 logs.go:276] 0 containers: []
	W0729 13:40:43.998506  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:43.998515  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:43.998584  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:44.038142  301425 cri.go:89] found id: ""
	I0729 13:40:44.038173  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.038185  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:44.038193  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:44.038260  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:44.077270  301425 cri.go:89] found id: ""
	I0729 13:40:44.077302  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.077313  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:44.077321  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:44.077391  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:44.117612  301425 cri.go:89] found id: ""
	I0729 13:40:44.117641  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.117661  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:44.117681  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:44.117749  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:44.152564  301425 cri.go:89] found id: ""
	I0729 13:40:44.152603  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.152615  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:44.152623  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:44.152683  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:44.188245  301425 cri.go:89] found id: ""
	I0729 13:40:44.188276  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.188288  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:44.188296  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:44.188355  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:44.224947  301425 cri.go:89] found id: ""
	I0729 13:40:44.224975  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.224983  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:44.224989  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:44.225037  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:44.264830  301425 cri.go:89] found id: ""
	I0729 13:40:44.264860  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.264867  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:44.264877  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:44.264893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:44.343145  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:44.343182  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:44.384619  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:44.384650  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:44.438195  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:44.438237  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:44.452115  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:44.452152  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:44.526586  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:43.666987  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.167143  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.825141  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.324972  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.827065  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.325488  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:47.027726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:47.041174  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:47.041242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:47.079265  301425 cri.go:89] found id: ""
	I0729 13:40:47.079295  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.079304  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:47.079313  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:47.079380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:47.119775  301425 cri.go:89] found id: ""
	I0729 13:40:47.119807  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.119820  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:47.119828  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:47.119904  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:47.155381  301425 cri.go:89] found id: ""
	I0729 13:40:47.155415  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.155426  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:47.155434  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:47.155490  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:47.195071  301425 cri.go:89] found id: ""
	I0729 13:40:47.195103  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.195111  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:47.195117  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:47.195167  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:47.229487  301425 cri.go:89] found id: ""
	I0729 13:40:47.229519  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.229531  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:47.229539  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:47.229611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:47.266159  301425 cri.go:89] found id: ""
	I0729 13:40:47.266190  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.266201  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:47.266209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:47.266269  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:47.300813  301425 cri.go:89] found id: ""
	I0729 13:40:47.300845  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.300854  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:47.300860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:47.300916  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:47.340378  301425 cri.go:89] found id: ""
	I0729 13:40:47.340412  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.340432  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:47.340444  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:47.340464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:47.395403  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:47.395444  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:47.409505  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:47.409539  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:47.481327  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.481349  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:47.481365  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:47.560129  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:47.560172  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.105832  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:50.121192  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:50.121264  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:50.160217  301425 cri.go:89] found id: ""
	I0729 13:40:50.160247  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.160256  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:50.160262  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:50.160313  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:50.199952  301425 cri.go:89] found id: ""
	I0729 13:40:50.199986  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.199998  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:50.200005  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:50.200065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:50.240036  301425 cri.go:89] found id: ""
	I0729 13:40:50.240069  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.240076  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:50.240083  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:50.240134  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:50.279761  301425 cri.go:89] found id: ""
	I0729 13:40:50.279788  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.279796  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:50.279802  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:50.279852  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:50.320324  301425 cri.go:89] found id: ""
	I0729 13:40:50.320350  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.320358  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:50.320364  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:50.320423  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:50.356385  301425 cri.go:89] found id: ""
	I0729 13:40:50.356413  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.356421  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:50.356427  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:50.356482  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:50.396866  301425 cri.go:89] found id: ""
	I0729 13:40:50.396900  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.396912  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:50.396919  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:50.397008  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:50.434778  301425 cri.go:89] found id: ""
	I0729 13:40:50.434812  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.434823  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:50.434836  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:50.434853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:50.447746  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:50.447776  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:50.523750  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:50.523772  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:50.523787  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:50.604206  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:50.604255  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.647414  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:50.647449  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:48.666463  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.666670  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.823595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.824045  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.826836  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:51.326943  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.327715  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.201653  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:53.215745  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:53.215814  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:53.250482  301425 cri.go:89] found id: ""
	I0729 13:40:53.250508  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.250516  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:53.250522  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:53.250583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:53.285956  301425 cri.go:89] found id: ""
	I0729 13:40:53.285988  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.285996  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:53.286002  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:53.286055  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:53.320248  301425 cri.go:89] found id: ""
	I0729 13:40:53.320281  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.320292  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:53.320300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:53.320364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:53.355155  301425 cri.go:89] found id: ""
	I0729 13:40:53.355188  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.355200  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:53.355209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:53.355271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:53.389519  301425 cri.go:89] found id: ""
	I0729 13:40:53.389549  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.389557  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:53.389564  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:53.389620  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:53.424391  301425 cri.go:89] found id: ""
	I0729 13:40:53.424419  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.424427  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:53.424433  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:53.424492  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:53.463297  301425 cri.go:89] found id: ""
	I0729 13:40:53.463331  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.463342  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:53.463350  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:53.463433  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:53.497565  301425 cri.go:89] found id: ""
	I0729 13:40:53.497593  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.497601  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:53.497610  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:53.497622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.548906  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:53.548948  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:53.562789  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:53.562823  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:53.635656  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:53.635679  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:53.635693  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:53.715973  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:53.716024  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:53.166007  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.166420  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.324486  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.824480  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.825127  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.326505  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:56.258726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:56.273826  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:56.273905  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:56.310881  301425 cri.go:89] found id: ""
	I0729 13:40:56.310927  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.310936  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:56.310944  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:56.310999  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:56.350104  301425 cri.go:89] found id: ""
	I0729 13:40:56.350139  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.350151  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:56.350158  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:56.350221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:56.385100  301425 cri.go:89] found id: ""
	I0729 13:40:56.385136  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.385145  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:56.385151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:56.385234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:56.421904  301425 cri.go:89] found id: ""
	I0729 13:40:56.421941  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.421953  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:56.421961  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:56.422025  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:56.457366  301425 cri.go:89] found id: ""
	I0729 13:40:56.457403  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.457414  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:56.457422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:56.457491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:56.496700  301425 cri.go:89] found id: ""
	I0729 13:40:56.496732  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.496746  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:56.496755  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:56.496844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:56.532011  301425 cri.go:89] found id: ""
	I0729 13:40:56.532039  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.532047  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:56.532053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:56.532102  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:56.567511  301425 cri.go:89] found id: ""
	I0729 13:40:56.567543  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.567554  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:56.567566  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:56.567581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:56.615875  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:56.615914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:56.629818  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:56.629862  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:56.703255  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:56.703284  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:56.703298  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:56.786466  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:56.786508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:59.328670  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:59.342993  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:59.343061  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:59.378267  301425 cri.go:89] found id: ""
	I0729 13:40:59.378301  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.378313  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:59.378321  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:59.378392  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:59.415637  301425 cri.go:89] found id: ""
	I0729 13:40:59.415669  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.415680  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:59.415687  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:59.415759  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:59.451170  301425 cri.go:89] found id: ""
	I0729 13:40:59.451204  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.451212  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:59.451219  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:59.451275  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:59.485914  301425 cri.go:89] found id: ""
	I0729 13:40:59.485948  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.485960  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:59.485975  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:59.486052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:59.523168  301425 cri.go:89] found id: ""
	I0729 13:40:59.523198  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.523208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:59.523216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:59.523274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:59.557711  301425 cri.go:89] found id: ""
	I0729 13:40:59.557746  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.557758  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:59.557766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:59.557826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:59.593387  301425 cri.go:89] found id: ""
	I0729 13:40:59.593421  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.593434  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:59.593442  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:59.593506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:59.627521  301425 cri.go:89] found id: ""
	I0729 13:40:59.627555  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.627566  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:59.627578  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:59.627597  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:59.677497  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:59.677538  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:59.692116  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:59.692150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:59.759344  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:59.759369  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:59.759382  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:59.840380  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:59.840423  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:57.166964  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:59.666395  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:01.667229  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.323708  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.323995  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.325049  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.328293  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.826414  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.380718  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:02.394436  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:02.394497  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:02.433283  301425 cri.go:89] found id: ""
	I0729 13:41:02.433313  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.433323  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:02.433332  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:02.433393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:02.467206  301425 cri.go:89] found id: ""
	I0729 13:41:02.467232  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.467241  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:02.467247  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:02.467300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:02.502743  301425 cri.go:89] found id: ""
	I0729 13:41:02.502774  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.502783  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:02.502790  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:02.502844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:02.536415  301425 cri.go:89] found id: ""
	I0729 13:41:02.536449  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.536462  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:02.536470  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:02.536527  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:02.570572  301425 cri.go:89] found id: ""
	I0729 13:41:02.570610  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.570621  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:02.570629  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:02.570702  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:02.606251  301425 cri.go:89] found id: ""
	I0729 13:41:02.606277  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.606285  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:02.606292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:02.606345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:02.644637  301425 cri.go:89] found id: ""
	I0729 13:41:02.644664  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.644675  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:02.644683  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:02.644750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:02.679493  301425 cri.go:89] found id: ""
	I0729 13:41:02.679519  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.679527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:02.679537  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:02.679553  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.734865  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:02.734896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:02.787929  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:02.787962  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:02.801317  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:02.801344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:02.867838  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:02.867862  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:02.867877  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.451323  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:05.465262  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:05.465338  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:05.499797  301425 cri.go:89] found id: ""
	I0729 13:41:05.499827  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.499837  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:05.499845  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:05.499912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:05.534363  301425 cri.go:89] found id: ""
	I0729 13:41:05.534403  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.534416  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:05.534424  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:05.534483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:05.571366  301425 cri.go:89] found id: ""
	I0729 13:41:05.571397  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.571408  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:05.571416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:05.571481  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:05.611301  301425 cri.go:89] found id: ""
	I0729 13:41:05.611335  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.611346  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:05.611355  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:05.611422  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:05.650698  301425 cri.go:89] found id: ""
	I0729 13:41:05.650738  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.650750  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:05.650758  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:05.650823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:05.686166  301425 cri.go:89] found id: ""
	I0729 13:41:05.686204  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.686216  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:05.686225  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:05.686279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:05.724567  301425 cri.go:89] found id: ""
	I0729 13:41:05.724604  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.724616  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:05.724628  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:05.724691  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:05.760401  301425 cri.go:89] found id: ""
	I0729 13:41:05.760430  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.760438  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:05.760448  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:05.760464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:05.811654  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:05.811698  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:05.827189  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:05.827226  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:05.899612  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:05.899636  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:05.899654  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:04.168533  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.665694  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:04.325443  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.824244  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.325499  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:07.326413  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.982384  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:05.982425  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.527609  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:08.542024  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:08.542086  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:08.576313  301425 cri.go:89] found id: ""
	I0729 13:41:08.576340  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.576348  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:08.576354  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:08.576406  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:08.609996  301425 cri.go:89] found id: ""
	I0729 13:41:08.610027  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.610038  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:08.610045  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:08.610111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:08.643722  301425 cri.go:89] found id: ""
	I0729 13:41:08.643750  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.643758  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:08.643765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:08.643815  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:08.679331  301425 cri.go:89] found id: ""
	I0729 13:41:08.679367  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.679378  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:08.679388  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:08.679459  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:08.718348  301425 cri.go:89] found id: ""
	I0729 13:41:08.718376  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.718384  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:08.718390  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:08.718444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:08.758086  301425 cri.go:89] found id: ""
	I0729 13:41:08.758128  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.758140  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:08.758150  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:08.758225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:08.794304  301425 cri.go:89] found id: ""
	I0729 13:41:08.794333  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.794345  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:08.794354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:08.794415  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:08.835448  301425 cri.go:89] found id: ""
	I0729 13:41:08.835477  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.835486  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:08.835495  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:08.835508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:08.923886  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:08.923931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.963921  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:08.963957  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:09.013852  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:09.013893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:09.027838  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:09.027872  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:09.097864  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:08.669271  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.165979  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:08.824724  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:10.825582  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:09.327071  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.826906  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.598762  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:11.612789  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:11.612903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:11.650029  301425 cri.go:89] found id: ""
	I0729 13:41:11.650063  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.650074  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:11.650084  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:11.650152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:11.687479  301425 cri.go:89] found id: ""
	I0729 13:41:11.687510  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.687520  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:11.687527  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:11.687593  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:11.723788  301425 cri.go:89] found id: ""
	I0729 13:41:11.723816  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.723824  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:11.723830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:11.723878  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:11.760304  301425 cri.go:89] found id: ""
	I0729 13:41:11.760341  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.760353  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:11.760361  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:11.760429  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:11.794175  301425 cri.go:89] found id: ""
	I0729 13:41:11.794202  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.794210  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:11.794216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:11.794276  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:11.830653  301425 cri.go:89] found id: ""
	I0729 13:41:11.830679  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.830689  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:11.830697  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:11.830755  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:11.869360  301425 cri.go:89] found id: ""
	I0729 13:41:11.869391  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.869403  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:11.869410  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:11.869473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:11.904164  301425 cri.go:89] found id: ""
	I0729 13:41:11.904195  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.904206  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:11.904218  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:11.904236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:11.979031  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.979054  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:11.979069  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:12.064215  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:12.064254  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:12.101854  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:12.101896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:12.152327  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:12.152362  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:14.668032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:14.683118  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:14.683182  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:14.722574  301425 cri.go:89] found id: ""
	I0729 13:41:14.722602  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.722612  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:14.722619  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:14.722686  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:14.759047  301425 cri.go:89] found id: ""
	I0729 13:41:14.759084  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.759094  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:14.759099  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:14.759156  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:14.794363  301425 cri.go:89] found id: ""
	I0729 13:41:14.794400  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.794411  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:14.794418  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:14.794488  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:14.831542  301425 cri.go:89] found id: ""
	I0729 13:41:14.831579  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.831586  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:14.831592  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:14.831650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:14.878710  301425 cri.go:89] found id: ""
	I0729 13:41:14.878745  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.878758  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:14.878765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:14.878824  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:14.937804  301425 cri.go:89] found id: ""
	I0729 13:41:14.937837  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.937847  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:14.937856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:14.937923  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:14.985616  301425 cri.go:89] found id: ""
	I0729 13:41:14.985649  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.985658  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:14.985665  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:14.985737  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:15.023210  301425 cri.go:89] found id: ""
	I0729 13:41:15.023248  301425 logs.go:276] 0 containers: []
	W0729 13:41:15.023261  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:15.023273  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:15.023288  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:15.072549  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:15.072587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:15.086624  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:15.086653  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:15.155391  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:15.155412  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:15.155426  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:15.237480  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:15.237535  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:13.666473  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.666831  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:13.324177  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.324419  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:14.326023  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:16.826314  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.779568  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:17.794163  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:17.794225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:17.831416  301425 cri.go:89] found id: ""
	I0729 13:41:17.831446  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.831456  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:17.831463  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:17.831519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:17.868713  301425 cri.go:89] found id: ""
	I0729 13:41:17.868740  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.868752  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:17.868758  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:17.868834  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:17.913159  301425 cri.go:89] found id: ""
	I0729 13:41:17.913200  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.913211  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:17.913221  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:17.913291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:17.947528  301425 cri.go:89] found id: ""
	I0729 13:41:17.947559  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.947567  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:17.947573  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:17.947693  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:17.982280  301425 cri.go:89] found id: ""
	I0729 13:41:17.982314  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.982323  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:17.982330  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:17.982407  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:18.023729  301425 cri.go:89] found id: ""
	I0729 13:41:18.023767  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.023776  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:18.023783  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:18.023847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:18.061594  301425 cri.go:89] found id: ""
	I0729 13:41:18.061629  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.061637  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:18.061642  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:18.061694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:18.095705  301425 cri.go:89] found id: ""
	I0729 13:41:18.095735  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.095745  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:18.095758  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:18.095778  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:18.175843  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:18.175879  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:18.222979  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:18.223015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:18.277265  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:18.277308  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:18.291002  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:18.291037  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:18.373425  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:20.873958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:20.888091  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:20.888153  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:20.925850  301425 cri.go:89] found id: ""
	I0729 13:41:20.925886  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.925894  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:20.925901  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:20.925955  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:20.962725  301425 cri.go:89] found id: ""
	I0729 13:41:20.962762  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.962774  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:20.962782  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:20.962847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:18.166668  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.166993  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.827065  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.325697  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:19.325369  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:21.326574  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.998741  301425 cri.go:89] found id: ""
	I0729 13:41:20.998778  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.998787  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:20.998794  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:20.998842  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:21.036370  301425 cri.go:89] found id: ""
	I0729 13:41:21.036401  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.036410  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:21.036417  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:21.036483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:21.071560  301425 cri.go:89] found id: ""
	I0729 13:41:21.071588  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.071597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:21.071605  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:21.071670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:21.106778  301425 cri.go:89] found id: ""
	I0729 13:41:21.106810  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.106822  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:21.106830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:21.106890  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:21.139901  301425 cri.go:89] found id: ""
	I0729 13:41:21.139926  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.139934  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:21.139940  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:21.140001  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:21.173281  301425 cri.go:89] found id: ""
	I0729 13:41:21.173312  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.173320  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:21.173330  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:21.173344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:21.225055  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:21.225095  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:21.239780  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:21.239864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:21.313460  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:21.313486  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:21.313504  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:21.398557  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:21.398599  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:23.937873  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:23.951595  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:23.951653  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:23.987177  301425 cri.go:89] found id: ""
	I0729 13:41:23.987208  301425 logs.go:276] 0 containers: []
	W0729 13:41:23.987217  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:23.987225  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:23.987324  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:24.030197  301425 cri.go:89] found id: ""
	I0729 13:41:24.030251  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.030264  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:24.030272  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:24.030339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:24.068031  301425 cri.go:89] found id: ""
	I0729 13:41:24.068061  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.068074  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:24.068081  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:24.068154  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:24.107192  301425 cri.go:89] found id: ""
	I0729 13:41:24.107221  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.107232  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:24.107239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:24.107304  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:24.143154  301425 cri.go:89] found id: ""
	I0729 13:41:24.143182  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.143190  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:24.143196  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:24.143248  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:24.181268  301425 cri.go:89] found id: ""
	I0729 13:41:24.181296  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.181304  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:24.181311  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:24.181370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:24.215248  301425 cri.go:89] found id: ""
	I0729 13:41:24.215284  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.215293  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:24.215299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:24.215363  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:24.250796  301425 cri.go:89] found id: ""
	I0729 13:41:24.250822  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.250831  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:24.250841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:24.250853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:24.305841  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:24.305883  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:24.320182  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:24.320214  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:24.389667  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:24.389690  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:24.389707  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:24.471435  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:24.471479  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:22.665718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.166432  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:22.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:24.826598  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:26.828504  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:23.825754  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.834253  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:28.329733  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:27.014508  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:27.029318  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:27.029382  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:27.064115  301425 cri.go:89] found id: ""
	I0729 13:41:27.064150  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.064161  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:27.064169  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:27.064250  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:27.099081  301425 cri.go:89] found id: ""
	I0729 13:41:27.099110  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.099123  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:27.099131  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:27.099197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:27.132475  301425 cri.go:89] found id: ""
	I0729 13:41:27.132506  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.132518  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:27.132527  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:27.132595  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:27.168924  301425 cri.go:89] found id: ""
	I0729 13:41:27.168948  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.168956  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:27.168962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:27.169015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:27.204052  301425 cri.go:89] found id: ""
	I0729 13:41:27.204082  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.204094  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:27.204109  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:27.204170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:27.238355  301425 cri.go:89] found id: ""
	I0729 13:41:27.238383  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.238391  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:27.238397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:27.238496  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:27.276104  301425 cri.go:89] found id: ""
	I0729 13:41:27.276139  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.276150  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:27.276157  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:27.276222  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:27.308612  301425 cri.go:89] found id: ""
	I0729 13:41:27.308643  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.308654  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:27.308667  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:27.308683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:27.362472  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:27.362511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:27.376349  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:27.376383  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:27.458450  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:27.458472  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:27.458486  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:27.536405  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:27.536445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:30.076285  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:30.091308  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:30.091386  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:30.138335  301425 cri.go:89] found id: ""
	I0729 13:41:30.138369  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.138381  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:30.138389  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:30.138454  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:30.176395  301425 cri.go:89] found id: ""
	I0729 13:41:30.176425  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.176435  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:30.176443  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:30.176495  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:30.214990  301425 cri.go:89] found id: ""
	I0729 13:41:30.215027  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.215035  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:30.215041  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:30.215090  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:30.252051  301425 cri.go:89] found id: ""
	I0729 13:41:30.252080  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.252088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:30.252094  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:30.252155  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:30.287210  301425 cri.go:89] found id: ""
	I0729 13:41:30.287240  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.287249  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:30.287254  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:30.287337  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:30.322813  301425 cri.go:89] found id: ""
	I0729 13:41:30.322842  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.322851  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:30.322857  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:30.322924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:30.358697  301425 cri.go:89] found id: ""
	I0729 13:41:30.358730  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.358738  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:30.358744  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:30.358804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:30.394252  301425 cri.go:89] found id: ""
	I0729 13:41:30.394283  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.394294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:30.394305  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:30.394321  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:30.446777  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:30.446820  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:30.461564  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:30.461605  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:30.537918  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:30.537942  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:30.537958  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:30.613821  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:30.613865  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:27.167654  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.666133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.323396  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:31.324718  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:30.825879  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:32.826458  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.154081  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:33.168252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:33.168353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:33.205675  301425 cri.go:89] found id: ""
	I0729 13:41:33.205708  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.205719  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:33.205727  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:33.205799  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:33.240556  301425 cri.go:89] found id: ""
	I0729 13:41:33.240582  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.240590  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:33.240596  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:33.240644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:33.276662  301425 cri.go:89] found id: ""
	I0729 13:41:33.276690  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.276698  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:33.276704  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:33.276773  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:33.318631  301425 cri.go:89] found id: ""
	I0729 13:41:33.318667  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.318677  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:33.318685  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:33.318762  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:33.354372  301425 cri.go:89] found id: ""
	I0729 13:41:33.354403  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.354412  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:33.354421  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:33.354475  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:33.389309  301425 cri.go:89] found id: ""
	I0729 13:41:33.389337  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.389346  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:33.389352  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:33.389404  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:33.423689  301425 cri.go:89] found id: ""
	I0729 13:41:33.423732  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.423745  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:33.423753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:33.423823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:33.457556  301425 cri.go:89] found id: ""
	I0729 13:41:33.457593  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.457605  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:33.457618  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:33.457634  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:33.534377  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:33.534416  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.579646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:33.579689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:33.629784  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:33.629819  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:33.643878  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:33.643912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:33.716446  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:32.167152  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:34.666054  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.667479  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.823726  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.824199  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.324827  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.325672  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.216598  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:36.229904  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:36.230003  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:36.263721  301425 cri.go:89] found id: ""
	I0729 13:41:36.263752  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.263771  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:36.263786  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:36.263838  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:36.297900  301425 cri.go:89] found id: ""
	I0729 13:41:36.297932  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.297950  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:36.297958  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:36.298023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:36.338037  301425 cri.go:89] found id: ""
	I0729 13:41:36.338064  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.338072  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:36.338078  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:36.338125  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:36.375334  301425 cri.go:89] found id: ""
	I0729 13:41:36.375362  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.375370  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:36.375375  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:36.375426  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:36.410760  301425 cri.go:89] found id: ""
	I0729 13:41:36.410794  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.410805  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:36.410813  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:36.410888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:36.445247  301425 cri.go:89] found id: ""
	I0729 13:41:36.445280  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.445291  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:36.445300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:36.445364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:36.487183  301425 cri.go:89] found id: ""
	I0729 13:41:36.487214  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.487221  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:36.487228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:36.487301  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:36.522407  301425 cri.go:89] found id: ""
	I0729 13:41:36.522433  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.522442  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:36.522453  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:36.522468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:36.537163  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:36.537197  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:36.608334  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.608361  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:36.608376  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:36.689026  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:36.689074  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:36.728580  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:36.728618  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.279605  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:39.293259  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:39.293320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:39.329070  301425 cri.go:89] found id: ""
	I0729 13:41:39.329095  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.329103  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:39.329109  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:39.329160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:39.362992  301425 cri.go:89] found id: ""
	I0729 13:41:39.363023  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.363032  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:39.363038  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:39.363100  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:39.403094  301425 cri.go:89] found id: ""
	I0729 13:41:39.403128  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.403140  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:39.403147  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:39.403201  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:39.435761  301425 cri.go:89] found id: ""
	I0729 13:41:39.435795  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.435806  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:39.435814  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:39.435881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:39.468299  301425 cri.go:89] found id: ""
	I0729 13:41:39.468332  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.468341  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:39.468349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:39.468417  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:39.505114  301425 cri.go:89] found id: ""
	I0729 13:41:39.505149  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.505162  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:39.505172  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:39.505234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:39.536942  301425 cri.go:89] found id: ""
	I0729 13:41:39.536975  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.536986  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:39.536994  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:39.537064  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:39.577394  301425 cri.go:89] found id: ""
	I0729 13:41:39.577427  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.577439  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:39.577451  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:39.577468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.631143  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:39.631184  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:39.645020  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:39.645047  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:39.718256  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:39.718283  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:39.718297  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:39.801990  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:39.802036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:39.166762  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.167646  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.824966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.825836  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.324009  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.327169  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.826091  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.347066  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:42.359902  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:42.359983  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:42.395494  301425 cri.go:89] found id: ""
	I0729 13:41:42.395529  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.395540  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:42.395548  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:42.395611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:42.429305  301425 cri.go:89] found id: ""
	I0729 13:41:42.429334  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.429343  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:42.429350  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:42.429401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:42.466902  301425 cri.go:89] found id: ""
	I0729 13:41:42.466931  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.466942  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:42.466949  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:42.467017  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:42.504582  301425 cri.go:89] found id: ""
	I0729 13:41:42.504618  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.504628  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:42.504652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:42.504717  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:42.539649  301425 cri.go:89] found id: ""
	I0729 13:41:42.539676  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.539686  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:42.539695  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:42.539758  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:42.579209  301425 cri.go:89] found id: ""
	I0729 13:41:42.579238  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.579249  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:42.579257  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:42.579320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:42.614832  301425 cri.go:89] found id: ""
	I0729 13:41:42.614861  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.614869  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:42.614874  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:42.614925  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:42.651837  301425 cri.go:89] found id: ""
	I0729 13:41:42.651865  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.651873  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:42.651883  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:42.651899  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:42.707149  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:42.707190  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:42.720990  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:42.721043  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:42.789818  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:42.789849  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:42.789867  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:42.871880  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:42.871934  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.416172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:45.428923  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:45.428994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:45.466667  301425 cri.go:89] found id: ""
	I0729 13:41:45.466699  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.466710  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:45.466717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:45.466783  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:45.501779  301425 cri.go:89] found id: ""
	I0729 13:41:45.501813  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.501825  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:45.501832  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:45.501896  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:45.537507  301425 cri.go:89] found id: ""
	I0729 13:41:45.537537  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.537547  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:45.537554  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:45.537619  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:45.575430  301425 cri.go:89] found id: ""
	I0729 13:41:45.575460  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.575467  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:45.575474  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:45.575523  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:45.613009  301425 cri.go:89] found id: ""
	I0729 13:41:45.613038  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.613047  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:45.613053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:45.613103  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:45.650734  301425 cri.go:89] found id: ""
	I0729 13:41:45.650767  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.650778  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:45.650786  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:45.650853  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:45.684301  301425 cri.go:89] found id: ""
	I0729 13:41:45.684332  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.684341  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:45.684349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:45.684416  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:45.719861  301425 cri.go:89] found id: ""
	I0729 13:41:45.719901  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.719911  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:45.719921  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:45.719936  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:45.800422  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:45.800464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.842460  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:45.842493  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:45.897388  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:45.897430  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:45.911554  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:45.911587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:41:43.665771  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.666196  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:44.325813  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:46.824774  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:43.828518  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.830106  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:48.325196  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	W0729 13:41:45.984435  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.485014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:48.498038  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:48.498110  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:48.534248  301425 cri.go:89] found id: ""
	I0729 13:41:48.534280  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.534291  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:48.534299  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:48.534362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:48.572411  301425 cri.go:89] found id: ""
	I0729 13:41:48.572445  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.572457  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:48.572465  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:48.572524  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:48.612345  301425 cri.go:89] found id: ""
	I0729 13:41:48.612373  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.612381  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:48.612387  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:48.612450  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:48.650334  301425 cri.go:89] found id: ""
	I0729 13:41:48.650385  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.650395  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:48.650401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:48.650466  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:48.687460  301425 cri.go:89] found id: ""
	I0729 13:41:48.687490  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.687501  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:48.687508  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:48.687572  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:48.735028  301425 cri.go:89] found id: ""
	I0729 13:41:48.735064  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.735077  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:48.735085  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:48.735142  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:48.771175  301425 cri.go:89] found id: ""
	I0729 13:41:48.771209  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.771220  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:48.771228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:48.771300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:48.808267  301425 cri.go:89] found id: ""
	I0729 13:41:48.808295  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.808304  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:48.808314  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:48.808328  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:48.850520  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:48.850557  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:48.902563  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:48.902612  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:48.919082  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:48.919114  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:48.999185  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.999213  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:48.999241  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:48.166020  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:49.323402  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.326596  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.825399  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:52.831823  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.579922  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:51.593149  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:51.593213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:51.626302  301425 cri.go:89] found id: ""
	I0729 13:41:51.626330  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.626338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:51.626344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:51.626393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:51.659551  301425 cri.go:89] found id: ""
	I0729 13:41:51.659578  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.659586  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:51.659592  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:51.659642  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:51.696842  301425 cri.go:89] found id: ""
	I0729 13:41:51.696868  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.696876  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:51.696882  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:51.696937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:51.737209  301425 cri.go:89] found id: ""
	I0729 13:41:51.737237  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.737246  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:51.737253  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:51.737317  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:51.772782  301425 cri.go:89] found id: ""
	I0729 13:41:51.772829  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.772842  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:51.772850  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:51.772921  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:51.806649  301425 cri.go:89] found id: ""
	I0729 13:41:51.806679  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.806690  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:51.806698  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:51.806771  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:51.848950  301425 cri.go:89] found id: ""
	I0729 13:41:51.848978  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.848989  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:51.848997  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:51.849065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:51.884875  301425 cri.go:89] found id: ""
	I0729 13:41:51.884902  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.884910  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:51.884920  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:51.884932  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.964282  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:51.964322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:52.004218  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:52.004251  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:52.056230  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:52.056266  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.069591  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:52.069622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:52.142552  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:54.643154  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:54.657199  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:54.657259  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:54.694124  301425 cri.go:89] found id: ""
	I0729 13:41:54.694152  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.694159  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:54.694165  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:54.694221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:54.732072  301425 cri.go:89] found id: ""
	I0729 13:41:54.732109  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.732119  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:54.732127  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:54.732194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:54.768257  301425 cri.go:89] found id: ""
	I0729 13:41:54.768294  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.768306  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:54.768314  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:54.768383  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:54.807596  301425 cri.go:89] found id: ""
	I0729 13:41:54.807631  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.807643  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:54.807651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:54.807716  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:54.845107  301425 cri.go:89] found id: ""
	I0729 13:41:54.845134  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.845142  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:54.845148  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:54.845197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:54.880627  301425 cri.go:89] found id: ""
	I0729 13:41:54.880655  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.880667  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:54.880675  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:54.880750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:54.918122  301425 cri.go:89] found id: ""
	I0729 13:41:54.918151  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.918159  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:54.918165  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:54.918219  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:54.956943  301425 cri.go:89] found id: ""
	I0729 13:41:54.956986  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.956999  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:54.957022  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:54.957036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:55.032512  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:55.032547  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:55.032564  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:55.116653  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:55.116699  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:55.177030  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:55.177059  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:55.238789  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:55.238831  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.166339  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:54.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:53.824694  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:56.324761  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:55.324698  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.326135  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.753504  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:57.766354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:57.766436  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:57.802691  301425 cri.go:89] found id: ""
	I0729 13:41:57.802728  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.802740  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:57.802746  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:57.802807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:57.839800  301425 cri.go:89] found id: ""
	I0729 13:41:57.839823  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.839830  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:57.839846  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:57.839902  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:57.881592  301425 cri.go:89] found id: ""
	I0729 13:41:57.881617  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.881625  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:57.881631  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:57.881681  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.916245  301425 cri.go:89] found id: ""
	I0729 13:41:57.916273  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.916282  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:57.916290  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:57.916346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:57.952224  301425 cri.go:89] found id: ""
	I0729 13:41:57.952261  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.952272  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:57.952280  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:57.952340  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:57.985508  301425 cri.go:89] found id: ""
	I0729 13:41:57.985537  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.985548  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:57.985557  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:57.985624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:58.022354  301425 cri.go:89] found id: ""
	I0729 13:41:58.022382  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.022391  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:58.022397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:58.022462  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:58.055865  301425 cri.go:89] found id: ""
	I0729 13:41:58.055891  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.055900  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:58.055914  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:58.055931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:58.069143  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:58.069177  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:58.143137  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:58.143164  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:58.143183  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:58.224631  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:58.224672  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:58.266437  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:58.266470  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:00.819300  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:00.834195  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:00.834258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:00.869660  301425 cri.go:89] found id: ""
	I0729 13:42:00.869697  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.869709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:00.869717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:00.869777  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:00.915601  301425 cri.go:89] found id: ""
	I0729 13:42:00.915630  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.915638  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:00.915644  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:00.915694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:00.956981  301425 cri.go:89] found id: ""
	I0729 13:42:00.957020  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.957028  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:00.957034  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:00.957094  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.166038  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.666455  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.666824  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:58.824729  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.825513  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.825074  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.826480  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.995761  301425 cri.go:89] found id: ""
	I0729 13:42:00.995793  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.995801  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:00.995817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:00.995869  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:01.047668  301425 cri.go:89] found id: ""
	I0729 13:42:01.047699  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.047707  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:01.047713  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:01.047787  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:01.085178  301425 cri.go:89] found id: ""
	I0729 13:42:01.085209  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.085217  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:01.085224  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:01.085278  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:01.125282  301425 cri.go:89] found id: ""
	I0729 13:42:01.125310  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.125320  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:01.125329  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:01.125396  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:01.165972  301425 cri.go:89] found id: ""
	I0729 13:42:01.166005  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.166021  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:01.166033  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:01.166049  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:01.236500  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:01.236523  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:01.236540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:01.320918  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:01.320959  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:01.366975  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:01.367015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:01.420347  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:01.420389  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:03.936048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:03.949603  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:03.949679  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:03.987529  301425 cri.go:89] found id: ""
	I0729 13:42:03.987557  301425 logs.go:276] 0 containers: []
	W0729 13:42:03.987567  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:03.987574  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:03.987639  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:04.027325  301425 cri.go:89] found id: ""
	I0729 13:42:04.027355  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.027365  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:04.027372  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:04.027437  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:04.063019  301425 cri.go:89] found id: ""
	I0729 13:42:04.063050  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.063059  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:04.063065  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:04.063117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:04.101106  301425 cri.go:89] found id: ""
	I0729 13:42:04.101135  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.101146  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:04.101153  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:04.101242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:04.137186  301425 cri.go:89] found id: ""
	I0729 13:42:04.137219  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.137230  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:04.137238  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:04.137302  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:04.175732  301425 cri.go:89] found id: ""
	I0729 13:42:04.175761  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.175770  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:04.175776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:04.175826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:04.213265  301425 cri.go:89] found id: ""
	I0729 13:42:04.213296  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.213307  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:04.213315  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:04.213381  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:04.248581  301425 cri.go:89] found id: ""
	I0729 13:42:04.248609  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.248617  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:04.248627  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:04.248643  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:04.303277  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:04.303400  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:04.317518  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:04.317547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:04.385209  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:04.385229  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:04.385242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:04.470629  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:04.470680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
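(Editor's sketch, not part of the log.) Each gathering cycle above repeats the same container check: `sudo crictl ps -a --quiet --name=<component>`, where an empty result produces the "No container was found matching" warnings. A minimal local stand-in for that check is sketched below; minikube actually runs the command on the node over SSH via ssh_runner, and the helper name here is illustrative only.

// Sketch: list CRI containers by name with crictl, treating empty output as "not found".
// Assumes crictl is available locally; the real harness executes this over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a name filter.
// An empty slice corresponds to the "No container was found matching" warnings above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}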
	I0729 13:42:04.167299  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.168006  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.324087  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:05.324904  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.826588  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.325326  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:08.326125  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.012455  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:07.028535  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:07.028621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:07.063453  301425 cri.go:89] found id: ""
	I0729 13:42:07.063496  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.063505  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:07.063511  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:07.063582  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:07.098243  301425 cri.go:89] found id: ""
	I0729 13:42:07.098274  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.098284  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:07.098291  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:07.098357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:07.138122  301425 cri.go:89] found id: ""
	I0729 13:42:07.138149  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.138157  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:07.138162  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:07.138213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:07.176772  301425 cri.go:89] found id: ""
	I0729 13:42:07.176814  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.176826  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:07.176835  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:07.176894  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:07.214867  301425 cri.go:89] found id: ""
	I0729 13:42:07.214898  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.214914  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:07.214920  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:07.214979  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:07.253443  301425 cri.go:89] found id: ""
	I0729 13:42:07.253471  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.253481  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:07.253490  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:07.253550  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:07.287284  301425 cri.go:89] found id: ""
	I0729 13:42:07.287326  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.287338  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:07.287349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:07.287411  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:07.330550  301425 cri.go:89] found id: ""
	I0729 13:42:07.330577  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.330588  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:07.330599  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:07.330620  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:07.384226  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:07.384268  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:07.398790  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:07.398817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:07.462868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:07.462893  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:07.462914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:07.538665  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:07.538706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.078452  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:10.091962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:10.092027  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:10.127401  301425 cri.go:89] found id: ""
	I0729 13:42:10.127434  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.127445  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:10.127454  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:10.127531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:10.161088  301425 cri.go:89] found id: ""
	I0729 13:42:10.161117  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.161127  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:10.161134  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:10.161187  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:10.199721  301425 cri.go:89] found id: ""
	I0729 13:42:10.199751  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.199763  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:10.199769  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:10.199821  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:10.237067  301425 cri.go:89] found id: ""
	I0729 13:42:10.237106  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.237120  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:10.237127  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:10.237191  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:10.275863  301425 cri.go:89] found id: ""
	I0729 13:42:10.275894  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.275909  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:10.275918  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:10.275981  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:10.313234  301425 cri.go:89] found id: ""
	I0729 13:42:10.313262  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.313270  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:10.313276  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:10.313334  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:10.353530  301425 cri.go:89] found id: ""
	I0729 13:42:10.353558  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.353569  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:10.353576  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:10.353644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:10.389488  301425 cri.go:89] found id: ""
	I0729 13:42:10.389516  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.389527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:10.389539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:10.389562  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.428705  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:10.428740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:10.484413  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:10.484456  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:10.499203  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:10.499248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:10.570868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:10.570894  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:10.570907  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:08.667158  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:11.166721  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.825638  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.324753  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.326752  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:12.826001  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:13.151788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:13.165297  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:13.165367  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:13.203752  301425 cri.go:89] found id: ""
	I0729 13:42:13.203786  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.203798  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:13.203805  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:13.203874  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:13.240454  301425 cri.go:89] found id: ""
	I0729 13:42:13.240491  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.240499  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:13.240504  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:13.240556  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:13.276508  301425 cri.go:89] found id: ""
	I0729 13:42:13.276536  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.276545  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:13.276553  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:13.276617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:13.311252  301425 cri.go:89] found id: ""
	I0729 13:42:13.311280  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.311291  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:13.311299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:13.311353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:13.351777  301425 cri.go:89] found id: ""
	I0729 13:42:13.351808  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.351817  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:13.351823  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:13.351881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:13.389020  301425 cri.go:89] found id: ""
	I0729 13:42:13.389049  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.389058  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:13.389064  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:13.389126  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:13.424353  301425 cri.go:89] found id: ""
	I0729 13:42:13.424387  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.424395  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:13.424401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:13.424451  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:13.460755  301425 cri.go:89] found id: ""
	I0729 13:42:13.460788  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.460817  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:13.460830  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:13.460850  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:13.500201  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:13.500234  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:13.553319  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:13.553357  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:13.567496  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:13.567529  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:13.644662  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:13.644686  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:13.644700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:13.667287  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.160289  301044 pod_ready.go:81] duration metric: took 4m0.000442608s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:16.160321  301044 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 13:42:16.160342  301044 pod_ready.go:38] duration metric: took 4m7.984743222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:16.160378  301044 kubeadm.go:597] duration metric: took 4m16.091281244s to restartPrimaryControlPlane
	W0729 13:42:16.160459  301044 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:16.160486  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
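(Editor's sketch, not part of the log.) The pod_ready lines above record a readiness wait that gives up after its 4m0s deadline, after which the harness decides it cannot restart the control plane and falls back to `kubeadm reset`. The sketch below shows that wait pattern in a generic form; the kubectl invocation, 2s interval, and pod name are assumptions for illustration, since minikube itself polls through client-go rather than shelling out.

// Sketch: poll a pod's Ready condition until a deadline, then report a timeout.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the named pod's Ready condition is currently "True".
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

// waitPodReady polls until the pod is Ready or the timeout elapses.
func waitPodReady(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready, err := podReady(namespace, name); err == nil && ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %s/%s to be Ready", timeout, namespace, name)
}

func main() {
	// Pod name taken from the log above; namespace and timeout mirror the wait it records.
	if err := waitPodReady("kube-system", "metrics-server-569cc877fc-dlrjb", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}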
	I0729 13:42:12.825387  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.826853  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.827679  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.829149  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326337  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326370  300746 pod_ready.go:81] duration metric: took 4m0.007721109s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:17.326383  300746 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:42:17.326392  300746 pod_ready.go:38] duration metric: took 4m8.417741792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:17.326410  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:42:17.326446  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:17.326514  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:17.373993  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.374027  300746 cri.go:89] found id: ""
	I0729 13:42:17.374037  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:17.374118  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.384841  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:17.384929  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:17.422219  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.422253  300746 cri.go:89] found id: ""
	I0729 13:42:17.422263  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:17.422349  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.427319  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:17.427385  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:17.469310  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:17.469336  300746 cri.go:89] found id: ""
	I0729 13:42:17.469347  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:17.469412  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.474501  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:17.474590  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:17.520767  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:17.520808  300746 cri.go:89] found id: ""
	I0729 13:42:17.520818  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:17.520881  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.525543  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:17.525643  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:17.572718  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.572749  300746 cri.go:89] found id: ""
	I0729 13:42:17.572758  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:17.572839  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.577227  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:17.577304  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:17.614076  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.614098  300746 cri.go:89] found id: ""
	I0729 13:42:17.614106  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:17.614153  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.618404  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:17.618479  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:17.666242  300746 cri.go:89] found id: ""
	I0729 13:42:17.666275  300746 logs.go:276] 0 containers: []
	W0729 13:42:17.666285  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:17.666301  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:17.666373  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:17.713379  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:17.713411  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:17.713418  300746 cri.go:89] found id: ""
	I0729 13:42:17.713428  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:17.713493  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.719026  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.723948  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:17.723974  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:17.743561  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:17.743607  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.803393  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:17.803425  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.855689  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:17.855723  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.898327  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:17.898361  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.951024  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:17.951060  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:18.014040  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:18.014082  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:18.159937  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:18.159984  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:18.201626  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:18.201667  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:18.247168  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:18.247211  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:18.291431  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:18.291469  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:18.333636  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:18.333671  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
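(Editor's sketch, not part of the log.) Once containers are found, the gathering step above pulls their logs with `sudo /usr/bin/crictl logs --tail 400 <id>` and collects unit logs with `sudo journalctl -u <unit> -n 400`. A hedged local stand-in for those two calls is sketched below; the container IDs are placeholders (in the log they come from the preceding crictl ps step), and the real harness runs these over SSH.

// Sketch: fetch container logs via crictl and unit logs via journalctl.
package main

import (
	"fmt"
	"os/exec"
)

// containerLogs returns the last 400 lines of a container's logs via crictl.
func containerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// unitLogs returns the last 400 journal lines for a systemd unit (e.g. kubelet, crio).
func unitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	// Placeholder IDs; a real caller would pass IDs returned by `crictl ps`.
	for _, id := range []string{"<kube-apiserver-id>", "<etcd-id>"} {
		if logs, err := containerLogs(id); err != nil {
			fmt.Println("crictl logs failed:", err)
		} else {
			fmt.Println(logs)
		}
	}
	for _, unit := range []string{"kubelet", "crio"} {
		if logs, err := unitLogs(unit); err != nil {
			fmt.Println("journalctl failed:", err)
		} else {
			fmt.Println(logs)
		}
	}
}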
	I0729 13:42:16.226602  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:16.242934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:16.243005  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:16.284033  301425 cri.go:89] found id: ""
	I0729 13:42:16.284064  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.284075  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:16.284083  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:16.284152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:16.328362  301425 cri.go:89] found id: ""
	I0729 13:42:16.328388  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.328396  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:16.328402  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:16.328464  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:16.372664  301425 cri.go:89] found id: ""
	I0729 13:42:16.372701  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.372712  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:16.372727  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:16.372818  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:16.416085  301425 cri.go:89] found id: ""
	I0729 13:42:16.416119  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.416130  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:16.416138  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:16.416194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:16.457786  301425 cri.go:89] found id: ""
	I0729 13:42:16.457819  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.457830  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:16.457838  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:16.457903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:16.498929  301425 cri.go:89] found id: ""
	I0729 13:42:16.498962  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.498971  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:16.498979  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:16.499043  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:16.546159  301425 cri.go:89] found id: ""
	I0729 13:42:16.546187  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.546199  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:16.546207  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:16.546270  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:16.585010  301425 cri.go:89] found id: ""
	I0729 13:42:16.585041  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.585052  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:16.585065  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:16.585081  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:16.639033  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:16.639079  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:16.656209  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:16.656242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:16.734835  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:16.734863  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:16.734940  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.818756  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:16.818798  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
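(Editor's sketch, not part of the log.) Every retry cycle above opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits non-zero when no matching process exists, which is why the harness keeps looping back into log gathering. The sketch below mirrors that check locally; running it outside SSH is an assumption of the sketch.

// Sketch: detect a running kube-apiserver process with pgrep.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPIDs returns matching PIDs, or an error when pgrep finds nothing
// (pgrep exits 1 in that case), which callers treat as "not running yet".
func apiserverPIDs() ([]string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	pids, err := apiserverPIDs()
	if err != nil {
		fmt.Println("kube-apiserver process not found yet:", err)
		return
	}
	fmt.Println("kube-apiserver PIDs:", pids)
}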
	I0729 13:42:19.370796  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:19.384267  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:19.384354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:19.425595  301425 cri.go:89] found id: ""
	I0729 13:42:19.425629  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.425641  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:19.425650  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:19.425715  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:19.461470  301425 cri.go:89] found id: ""
	I0729 13:42:19.461506  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.461517  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:19.461524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:19.461592  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:19.508232  301425 cri.go:89] found id: ""
	I0729 13:42:19.508265  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.508275  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:19.508283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:19.508360  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:19.546226  301425 cri.go:89] found id: ""
	I0729 13:42:19.546259  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.546275  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:19.546283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:19.546354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:19.581125  301425 cri.go:89] found id: ""
	I0729 13:42:19.581156  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.581167  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:19.581176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:19.581242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:19.619680  301425 cri.go:89] found id: ""
	I0729 13:42:19.619719  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.619728  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:19.619736  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:19.619800  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:19.657096  301425 cri.go:89] found id: ""
	I0729 13:42:19.657126  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.657136  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:19.657142  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:19.657203  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:19.697247  301425 cri.go:89] found id: ""
	I0729 13:42:19.697277  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.697286  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:19.697297  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:19.697312  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:19.714900  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:19.714935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:19.794118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:19.794145  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:19.794161  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:19.907077  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:19.907122  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.949841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:19.949871  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:19.324474  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:21.826117  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:18.858720  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:18.858773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:21.419344  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:21.440121  300746 api_server.go:72] duration metric: took 4m17.790553991s to wait for apiserver process to appear ...
	I0729 13:42:21.440149  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:42:21.440190  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:21.440242  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:21.485874  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:21.485897  300746 cri.go:89] found id: ""
	I0729 13:42:21.485905  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:21.485956  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.490424  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:21.490493  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:21.532174  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:21.532202  300746 cri.go:89] found id: ""
	I0729 13:42:21.532211  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:21.532259  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.536561  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:21.536622  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:21.579375  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:21.579397  300746 cri.go:89] found id: ""
	I0729 13:42:21.579404  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:21.579450  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.584710  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:21.584779  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:21.621437  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.621465  300746 cri.go:89] found id: ""
	I0729 13:42:21.621475  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:21.621536  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.625829  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:21.625898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:21.666063  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:21.666086  300746 cri.go:89] found id: ""
	I0729 13:42:21.666095  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:21.666162  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.670822  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:21.670898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:21.713993  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:21.714022  300746 cri.go:89] found id: ""
	I0729 13:42:21.714032  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:21.714099  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.718967  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:21.719044  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:21.761282  300746 cri.go:89] found id: ""
	I0729 13:42:21.761312  300746 logs.go:276] 0 containers: []
	W0729 13:42:21.761320  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:21.761327  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:21.761390  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:21.810085  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:21.810114  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:21.810121  300746 cri.go:89] found id: ""
	I0729 13:42:21.810130  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:21.810185  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.814713  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.819968  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:21.819996  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:21.834798  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:21.834823  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:21.957963  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:21.958000  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.995345  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:21.995376  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:22.037737  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:22.037773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:22.074774  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:22.074813  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:22.123172  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.123205  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.181432  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:22.181473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:22.237128  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:22.237162  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:22.285733  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:22.285766  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:22.328258  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:22.328291  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:22.381239  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.381276  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:22.840466  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:22.840504  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
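(Editor's sketch, not part of the log.) The api_server.go lines above ("waiting for apiserver healthz status") describe a probe of the apiserver's /healthz endpoint until it answers "ok" or a deadline passes. A minimal sketch of such a probe follows; the URL (localhost:8443 appears elsewhere in this log but belongs to a different profile), the insecure TLS setting, the 2s interval, and the 4m deadline are all assumptions for illustration.

// Sketch: poll an apiserver /healthz endpoint until it reports "ok" or a deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// healthz returns true when the endpoint answers 200 with body "ok".
func healthz(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := healthz("https://localhost:8443/healthz"); err == nil && ok {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}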
	I0729 13:42:22.515296  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:22.529187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:22.529286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:22.573033  301425 cri.go:89] found id: ""
	I0729 13:42:22.573070  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.573082  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:22.573091  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:22.573152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:22.608443  301425 cri.go:89] found id: ""
	I0729 13:42:22.608476  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.608489  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:22.608496  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:22.608566  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:22.641672  301425 cri.go:89] found id: ""
	I0729 13:42:22.641704  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.641716  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:22.641724  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:22.641781  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:22.673902  301425 cri.go:89] found id: ""
	I0729 13:42:22.673934  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.673944  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:22.673952  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:22.674012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:22.715131  301425 cri.go:89] found id: ""
	I0729 13:42:22.715165  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.715179  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:22.715187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:22.715251  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:22.748807  301425 cri.go:89] found id: ""
	I0729 13:42:22.748838  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.748848  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:22.748856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:22.748924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:22.781972  301425 cri.go:89] found id: ""
	I0729 13:42:22.782002  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.782012  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:22.782021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:22.782088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:22.815791  301425 cri.go:89] found id: ""
	I0729 13:42:22.815823  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.815834  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:22.815848  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.815864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.873595  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:22.873631  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:22.888081  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:22.888123  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:22.959873  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:22.959899  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.959912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:23.040996  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:23.041035  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:25.585159  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:25.604154  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.604240  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.645428  301425 cri.go:89] found id: ""
	I0729 13:42:25.645459  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.645466  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:25.645474  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.645534  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.682758  301425 cri.go:89] found id: ""
	I0729 13:42:25.682785  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.682793  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:25.682799  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.682864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.724297  301425 cri.go:89] found id: ""
	I0729 13:42:25.724330  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.724341  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:25.724349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.724401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.761124  301425 cri.go:89] found id: ""
	I0729 13:42:25.761157  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.761168  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:25.761177  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.761229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.802698  301425 cri.go:89] found id: ""
	I0729 13:42:25.802728  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.802741  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:25.802750  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.802804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.840472  301425 cri.go:89] found id: ""
	I0729 13:42:25.840499  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.840509  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:25.840516  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.840586  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.875217  301425 cri.go:89] found id: ""
	I0729 13:42:25.875255  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.875267  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.875273  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:25.875345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:25.919895  301425 cri.go:89] found id: ""
	I0729 13:42:25.919937  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.919948  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:25.919963  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.919988  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:24.324138  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:26.324843  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:25.399606  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:42:25.405339  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:42:25.406585  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:42:25.406607  300746 api_server.go:131] duration metric: took 3.966451518s to wait for apiserver health ...
	I0729 13:42:25.406615  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:42:25.406640  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.406686  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.442039  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:25.442068  300746 cri.go:89] found id: ""
	I0729 13:42:25.442079  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:25.442140  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.446769  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.446830  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.482122  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:25.482144  300746 cri.go:89] found id: ""
	I0729 13:42:25.482156  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:25.482211  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.486666  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.486729  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.534553  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:25.534584  300746 cri.go:89] found id: ""
	I0729 13:42:25.534595  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:25.534657  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.539546  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.539624  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.577538  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.577562  300746 cri.go:89] found id: ""
	I0729 13:42:25.577572  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:25.577635  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.582377  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.582457  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.628918  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:25.628945  300746 cri.go:89] found id: ""
	I0729 13:42:25.628955  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:25.629027  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.633502  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.633592  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.673133  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.673156  300746 cri.go:89] found id: ""
	I0729 13:42:25.673163  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:25.673210  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.677905  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.677994  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.724757  300746 cri.go:89] found id: ""
	I0729 13:42:25.724780  300746 logs.go:276] 0 containers: []
	W0729 13:42:25.724805  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.724813  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:25.724887  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:25.775101  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.775130  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:25.775136  300746 cri.go:89] found id: ""
	I0729 13:42:25.775144  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:25.775219  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.782008  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.787032  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:25.787064  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.834985  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:25.835026  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.897295  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:25.897338  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.938020  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.938053  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:26.002775  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:26.002808  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:26.021431  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:26.021473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:26.071861  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:26.071898  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:26.130018  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:26.130057  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:26.170233  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:26.170290  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:26.207687  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.207718  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.600518  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:26.600575  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:26.707024  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:26.707074  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:26.753205  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.753240  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:29.302597  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:42:29.302626  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.302630  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.302634  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.302638  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.302641  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.302644  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.302649  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.302654  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.302661  300746 system_pods.go:74] duration metric: took 3.896040202s to wait for pod list to return data ...
	I0729 13:42:29.302670  300746 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:42:29.305640  300746 default_sa.go:45] found service account: "default"
	I0729 13:42:29.305668  300746 default_sa.go:55] duration metric: took 2.989028ms for default service account to be created ...
	I0729 13:42:29.305679  300746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:42:29.310472  300746 system_pods.go:86] 8 kube-system pods found
	I0729 13:42:29.310495  300746 system_pods.go:89] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.310500  300746 system_pods.go:89] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.310505  300746 system_pods.go:89] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.310509  300746 system_pods.go:89] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.310513  300746 system_pods.go:89] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.310517  300746 system_pods.go:89] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.310523  300746 system_pods.go:89] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.310528  300746 system_pods.go:89] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.310536  300746 system_pods.go:126] duration metric: took 4.851477ms to wait for k8s-apps to be running ...
	I0729 13:42:29.310545  300746 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:42:29.310580  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.329123  300746 system_svc.go:56] duration metric: took 18.569258ms WaitForService to wait for kubelet
	I0729 13:42:29.329155  300746 kubeadm.go:582] duration metric: took 4m25.679589837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:42:29.329182  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:42:29.332696  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:42:29.332726  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:42:29.332741  300746 node_conditions.go:105] duration metric: took 3.551684ms to run NodePressure ...
	I0729 13:42:29.332756  300746 start.go:241] waiting for startup goroutines ...
	I0729 13:42:29.332770  300746 start.go:246] waiting for cluster config update ...
	I0729 13:42:29.332784  300746 start.go:255] writing updated cluster config ...
	I0729 13:42:29.333168  300746 ssh_runner.go:195] Run: rm -f paused
	I0729 13:42:29.394738  300746 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:42:29.396826  300746 out.go:177] * Done! kubectl is now configured to use "no-preload-566777" cluster and "default" namespace by default
	I0729 13:42:25.981964  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:25.982005  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:25.997546  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:25.997576  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:26.075879  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:26.075901  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.075917  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.158552  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.158593  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:28.704328  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:28.718946  301425 kubeadm.go:597] duration metric: took 4m3.546660825s to restartPrimaryControlPlane
	W0729 13:42:28.719041  301425 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:28.719086  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:29.251866  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.267009  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:29.277498  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:29.287980  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:29.288003  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:29.288054  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:42:29.297830  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:29.297890  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:29.308263  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:42:29.318332  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:29.318388  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:29.328684  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.339841  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:29.339894  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.351304  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:42:29.363901  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:29.363960  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:29.377255  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:29.453113  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:42:29.453212  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:29.609835  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:29.609970  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:29.610106  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:29.812529  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:29.814455  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:29.814551  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:29.814633  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:29.814727  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:29.814799  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:29.814915  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:29.814979  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:29.815695  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:29.816098  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:29.816602  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:29.817114  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:29.817184  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:29.817266  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:30.122967  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:30.287162  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:30.336346  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:30.516317  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:30.532829  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:30.533732  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:30.533809  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:30.672345  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:30.674334  301425 out.go:204]   - Booting up control plane ...
	I0729 13:42:30.674492  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:30.681661  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:30.681784  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:30.683350  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:30.687290  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:42:28.327998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:30.823998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:32.824105  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:34.825475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:37.324435  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:39.824490  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:42.323305  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:44.329376  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:46.823645  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:47.980926  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.820407091s)
	I0729 13:42:47.981010  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:47.997344  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:48.007813  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:48.017519  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:48.017538  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:48.017579  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:42:48.028739  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:48.028819  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:48.038417  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:42:48.047864  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:48.047921  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:48.057408  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.066977  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:48.067040  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.077017  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:42:48.087204  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:48.087267  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:48.097659  301044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:48.149712  301044 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:42:48.149883  301044 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:48.277280  301044 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:48.277441  301044 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:48.277578  301044 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:48.505523  301044 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:48.507718  301044 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:48.507827  301044 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:48.507941  301044 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:48.508049  301044 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:48.508139  301044 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:48.508245  301044 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:48.508334  301044 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:48.508431  301044 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:48.508518  301044 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:48.508622  301044 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:48.508740  301044 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:48.508824  301044 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:48.508949  301044 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:48.545220  301044 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:48.620528  301044 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:42:48.781015  301044 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:49.039301  301044 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:49.104540  301044 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:49.105022  301044 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:49.107524  301044 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:49.109579  301044 out.go:204]   - Booting up control plane ...
	I0729 13:42:49.109698  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:49.109836  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:49.109924  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:49.129789  301044 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:49.130766  301044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:49.130844  301044 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:49.272901  301044 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:42:49.273017  301044 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:42:50.274804  301044 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001903151s
	I0729 13:42:50.274906  301044 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:42:48.825621  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:51.324025  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.276427  301044 kubeadm.go:310] [api-check] The API server is healthy after 5.001280529s
	I0729 13:42:55.289666  301044 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:42:55.309747  301044 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:42:55.343304  301044 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:42:55.343537  301044 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-972693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:42:55.366319  301044 kubeadm.go:310] [bootstrap-token] Using token: bvsox4.ktqddck1jfi3aduz
	I0729 13:42:55.367592  301044 out.go:204]   - Configuring RBAC rules ...
	I0729 13:42:55.367695  301044 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:42:55.380118  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:42:55.393704  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:42:55.397859  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:42:55.401567  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:42:55.407851  301044 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:42:55.684714  301044 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:42:56.128597  301044 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:42:56.683879  301044 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:42:56.685050  301044 kubeadm.go:310] 
	I0729 13:42:56.685127  301044 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:42:56.685137  301044 kubeadm.go:310] 
	I0729 13:42:56.685216  301044 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:42:56.685226  301044 kubeadm.go:310] 
	I0729 13:42:56.685252  301044 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:42:56.685335  301044 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:42:56.685414  301044 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:42:56.685422  301044 kubeadm.go:310] 
	I0729 13:42:56.685527  301044 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:42:56.685550  301044 kubeadm.go:310] 
	I0729 13:42:56.685607  301044 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:42:56.685617  301044 kubeadm.go:310] 
	I0729 13:42:56.685684  301044 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:42:56.685800  301044 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:42:56.685916  301044 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:42:56.685933  301044 kubeadm.go:310] 
	I0729 13:42:56.686048  301044 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:42:56.686149  301044 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:42:56.686162  301044 kubeadm.go:310] 
	I0729 13:42:56.686277  301044 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686416  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 13:42:56.686449  301044 kubeadm.go:310] 	--control-plane 
	I0729 13:42:56.686462  301044 kubeadm.go:310] 
	I0729 13:42:56.686562  301044 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:42:56.686571  301044 kubeadm.go:310] 
	I0729 13:42:56.686687  301044 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686839  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 13:42:56.687046  301044 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:42:56.687123  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:42:56.687140  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:42:56.689013  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:42:53.324453  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.326475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:56.690282  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:42:56.703026  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:42:56.722677  301044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-972693 minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=default-k8s-diff-port-972693 minikube.k8s.io/primary=true
	I0729 13:42:56.738921  301044 ops.go:34] apiserver oom_adj: -16
	I0729 13:42:56.902369  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.402842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.902902  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.403358  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.903112  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.402540  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.902605  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.402440  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.903011  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:01.403295  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.823966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:00.323772  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:01.818493  300705 pod_ready.go:81] duration metric: took 4m0.000972043s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:43:01.818528  300705 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:43:01.818537  300705 pod_ready.go:38] duration metric: took 4m4.037818748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:01.818555  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:01.818589  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:01.818643  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:01.874334  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:01.874359  300705 cri.go:89] found id: ""
	I0729 13:43:01.874369  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:01.874439  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.879122  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:01.879214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:01.919779  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:01.919804  300705 cri.go:89] found id: ""
	I0729 13:43:01.919814  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:01.919874  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.924895  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:01.924963  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:01.970365  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:01.970386  300705 cri.go:89] found id: ""
	I0729 13:43:01.970394  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:01.970444  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.975331  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:01.975409  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:02.013029  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.013062  300705 cri.go:89] found id: ""
	I0729 13:43:02.013074  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:02.013136  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.017958  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:02.018019  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:02.062357  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.062385  300705 cri.go:89] found id: ""
	I0729 13:43:02.062394  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:02.062463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.066791  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:02.066841  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:02.103790  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:02.103812  300705 cri.go:89] found id: ""
	I0729 13:43:02.103821  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:02.103882  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.108242  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:02.108293  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:02.151089  300705 cri.go:89] found id: ""
	I0729 13:43:02.151122  300705 logs.go:276] 0 containers: []
	W0729 13:43:02.151133  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:02.151141  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:02.151204  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:02.205700  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:02.205727  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.205732  300705 cri.go:89] found id: ""
	I0729 13:43:02.205741  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:02.205790  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.210332  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.214889  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:02.214913  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:02.229589  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:02.229621  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:02.278361  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:02.278394  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:02.319117  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:02.319146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.357874  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:02.357908  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.402114  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:02.402146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.442480  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:02.442514  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:01.903256  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.403400  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.902925  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.402616  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.903161  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.403255  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.902489  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.402506  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.902530  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:06.402436  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.953914  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:02.953961  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:03.013404  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:03.013441  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:03.151261  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:03.151294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:03.199910  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:03.199964  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:03.257103  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:03.257137  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:03.308519  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:03.308559  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:05.857929  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:05.878306  300705 api_server.go:72] duration metric: took 4m15.820258046s to wait for apiserver process to appear ...
	I0729 13:43:05.878338  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:05.878383  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:05.878451  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:05.924031  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:05.924071  300705 cri.go:89] found id: ""
	I0729 13:43:05.924083  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:05.924151  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.929284  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:05.929363  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:05.968980  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:05.969003  300705 cri.go:89] found id: ""
	I0729 13:43:05.969010  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:05.969056  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.973451  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:05.973516  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:06.011760  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.011784  300705 cri.go:89] found id: ""
	I0729 13:43:06.011794  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:06.011857  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.016065  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:06.016132  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:06.066319  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.066345  300705 cri.go:89] found id: ""
	I0729 13:43:06.066353  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:06.066420  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.071060  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:06.071120  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:06.117383  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.117405  300705 cri.go:89] found id: ""
	I0729 13:43:06.117413  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:06.117463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.121968  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:06.122053  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:06.156125  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.156151  300705 cri.go:89] found id: ""
	I0729 13:43:06.156160  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:06.156209  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.160301  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:06.160366  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:06.206751  300705 cri.go:89] found id: ""
	I0729 13:43:06.206780  300705 logs.go:276] 0 containers: []
	W0729 13:43:06.206790  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:06.206798  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:06.206860  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:06.248884  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.248918  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:06.248925  300705 cri.go:89] found id: ""
	I0729 13:43:06.248936  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:06.249006  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.253087  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.257229  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:06.257252  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.291495  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:06.291528  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.330190  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:06.330219  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.366500  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:06.366536  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.424871  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:06.424906  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:06.855025  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:06.855069  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:06.870025  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:06.870055  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:06.986590  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:06.986630  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:07.036972  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:07.037007  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:07.092602  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:07.092646  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:07.135326  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:07.135366  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:07.190208  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:07.190247  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:07.241865  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:07.241896  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.902842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.402861  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.903148  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.402619  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.902869  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.403349  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.903277  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.402468  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.535843  301044 kubeadm.go:1113] duration metric: took 13.813154738s to wait for elevateKubeSystemPrivileges
	I0729 13:43:10.535879  301044 kubeadm.go:394] duration metric: took 5m10.527995876s to StartCluster
	I0729 13:43:10.535899  301044 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.535991  301044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:43:10.538845  301044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.539141  301044 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:43:10.539343  301044 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:43:10.539513  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:43:10.539528  301044 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539556  301044 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539574  301044 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-972693"
	I0729 13:43:10.539587  301044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-972693"
	I0729 13:43:10.539600  301044 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539623  301044 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.539635  301044 addons.go:243] addon metrics-server should already be in state true
	I0729 13:43:10.539692  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	W0729 13:43:10.539594  301044 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:43:10.539817  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.540342  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540368  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540380  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540399  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540664  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540814  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.542249  301044 out.go:177] * Verifying Kubernetes components...
	I0729 13:43:10.543974  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:43:10.561555  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0729 13:43:10.561585  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0729 13:43:10.561820  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 13:43:10.562096  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562160  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562579  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562694  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562711  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.562750  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562766  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563224  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563236  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563496  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.563516  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563793  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563923  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.563959  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563982  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.564526  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.564781  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.569041  301044 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.569062  301044 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:43:10.569091  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.569443  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.569462  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.580340  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0729 13:43:10.580852  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.581371  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.581384  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.581724  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.581911  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.583937  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0729 13:43:10.584108  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.584422  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.584864  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.584881  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.585262  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.585445  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.586285  301044 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:43:10.586973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.587855  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:43:10.587873  301044 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:43:10.587907  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.588885  301044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:43:10.689091  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:43:10.689558  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:10.689837  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:10.590240  301044 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.590258  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:43:10.590275  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.592026  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0729 13:43:10.592306  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.592778  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.592859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.592877  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.593162  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.593295  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.593382  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.593455  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.593663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594055  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.594082  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594233  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.594388  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.594485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.594621  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.594882  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.594892  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.595227  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.595663  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.595680  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.611094  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0729 13:43:10.611617  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.612200  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.612224  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.612600  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.612973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.614541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.614743  301044 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:10.614757  301044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:43:10.614774  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.617611  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.618064  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.618416  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.618595  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.618754  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.791924  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:43:10.850744  301044 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866102  301044 node_ready.go:49] node "default-k8s-diff-port-972693" has status "Ready":"True"
	I0729 13:43:10.866137  301044 node_ready.go:38] duration metric: took 15.35404ms for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866171  301044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:10.877661  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:10.958120  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.981335  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:43:10.981363  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:43:10.982804  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:11.145078  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:43:11.145108  301044 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:43:11.236628  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:11.236658  301044 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:43:11.308646  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315025489s)
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290345752s)
	I0729 13:43:12.273254  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273270  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273283  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273296  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273572  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273589  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273598  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273606  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273704  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273721  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273731  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273739  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.275558  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275601  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275616  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.275624  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275634  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275644  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.309442  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.309473  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.309839  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.309888  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.309909  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.464546  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.155855113s)
	I0729 13:43:12.464601  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.464614  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465037  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465060  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465071  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.465081  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465398  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.465418  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465476  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465494  301044 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-972693"
	I0729 13:43:12.467315  301044 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:43:09.811571  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:43:09.817221  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:43:09.818319  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:09.818342  300705 api_server.go:131] duration metric: took 3.939996032s to wait for apiserver health ...
	I0729 13:43:09.818350  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:09.818373  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:09.818425  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:09.861856  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:09.861883  300705 cri.go:89] found id: ""
	I0729 13:43:09.861894  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:09.861962  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.867142  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:09.867216  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:09.909767  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:09.909795  300705 cri.go:89] found id: ""
	I0729 13:43:09.909808  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:09.909877  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.914410  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:09.914482  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:09.953540  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:09.953568  300705 cri.go:89] found id: ""
	I0729 13:43:09.953578  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:09.953637  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.958140  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:09.958214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:09.999809  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:09.999836  300705 cri.go:89] found id: ""
	I0729 13:43:09.999846  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:09.999911  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.004505  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:10.004587  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:10.049146  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.049173  300705 cri.go:89] found id: ""
	I0729 13:43:10.049182  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:10.049252  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.053631  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:10.053698  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:10.090361  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.090386  300705 cri.go:89] found id: ""
	I0729 13:43:10.090396  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:10.090442  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.095528  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:10.095588  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:10.131892  300705 cri.go:89] found id: ""
	I0729 13:43:10.131925  300705 logs.go:276] 0 containers: []
	W0729 13:43:10.131937  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:10.131944  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:10.132008  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:10.169101  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.169127  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.169133  300705 cri.go:89] found id: ""
	I0729 13:43:10.169142  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:10.169203  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.174716  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.179196  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:10.179217  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:10.222803  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:10.222833  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:10.265944  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:10.265975  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.310266  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:10.310294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.370562  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:10.370611  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.415759  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:10.415803  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:10.467672  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:10.467702  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:10.531249  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:10.531293  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:10.550454  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:10.550485  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:10.709028  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:10.709068  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:10.761048  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:10.761093  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:10.813125  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:10.813169  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.852581  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:10.852608  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:13.725236  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:43:13.725272  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.725279  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.725284  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.725289  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.725293  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.725298  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.725306  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.725312  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.725322  300705 system_pods.go:74] duration metric: took 3.906966083s to wait for pod list to return data ...
	I0729 13:43:13.725335  300705 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:13.727954  300705 default_sa.go:45] found service account: "default"
	I0729 13:43:13.727984  300705 default_sa.go:55] duration metric: took 2.638639ms for default service account to be created ...
	I0729 13:43:13.728032  300705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:13.733141  300705 system_pods.go:86] 8 kube-system pods found
	I0729 13:43:13.733163  300705 system_pods.go:89] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.733169  300705 system_pods.go:89] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.733173  300705 system_pods.go:89] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.733177  300705 system_pods.go:89] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.733181  300705 system_pods.go:89] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.733185  300705 system_pods.go:89] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.733191  300705 system_pods.go:89] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.733196  300705 system_pods.go:89] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.733205  300705 system_pods.go:126] duration metric: took 5.16021ms to wait for k8s-apps to be running ...
	I0729 13:43:13.733213  300705 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:13.733255  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:13.755011  300705 system_svc.go:56] duration metric: took 21.784065ms WaitForService to wait for kubelet
	I0729 13:43:13.755042  300705 kubeadm.go:582] duration metric: took 4m23.697000108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:13.755068  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:13.758549  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:13.758572  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:13.758586  300705 node_conditions.go:105] duration metric: took 3.512205ms to run NodePressure ...
	I0729 13:43:13.758601  300705 start.go:241] waiting for startup goroutines ...
	I0729 13:43:13.758612  300705 start.go:246] waiting for cluster config update ...
	I0729 13:43:13.758625  300705 start.go:255] writing updated cluster config ...
	I0729 13:43:13.758945  300705 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:13.810333  300705 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:13.812397  300705 out.go:177] * Done! kubectl is now configured to use "embed-certs-135920" cluster and "default" namespace by default
	I0729 13:43:12.468541  301044 addons.go:510] duration metric: took 1.929219306s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:43:12.887280  301044 pod_ready.go:102] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:13.386255  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.386279  301044 pod_ready.go:81] duration metric: took 2.508586907s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.386291  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391278  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.391302  301044 pod_ready.go:81] duration metric: took 5.00403ms for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391313  301044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396324  301044 pod_ready.go:92] pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.396343  301044 pod_ready.go:81] duration metric: took 5.022707ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396350  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403008  301044 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.403026  301044 pod_ready.go:81] duration metric: took 6.670677ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403035  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407836  301044 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.407856  301044 pod_ready.go:81] duration metric: took 4.814401ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407868  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783140  301044 pod_ready.go:92] pod "kube-proxy-tfsk9" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.783168  301044 pod_ready.go:81] duration metric: took 375.291599ms for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783181  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182560  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:14.182588  301044 pod_ready.go:81] duration metric: took 399.399691ms for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182597  301044 pod_ready.go:38] duration metric: took 3.316409576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:14.182610  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:14.182661  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:14.210715  301044 api_server.go:72] duration metric: took 3.671529553s to wait for apiserver process to appear ...
	I0729 13:43:14.210749  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:14.210790  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:43:14.214886  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:43:14.215773  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:14.215795  301044 api_server.go:131] duration metric: took 5.0389ms to wait for apiserver health ...
	I0729 13:43:14.215802  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:14.386356  301044 system_pods.go:59] 9 kube-system pods found
	I0729 13:43:14.386389  301044 system_pods.go:61] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.386394  301044 system_pods.go:61] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.386398  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.386401  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.386405  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.386409  301044 system_pods.go:61] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.386412  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.386417  301044 system_pods.go:61] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.386420  301044 system_pods.go:61] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.386430  301044 system_pods.go:74] duration metric: took 170.622271ms to wait for pod list to return data ...
	I0729 13:43:14.386437  301044 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:14.582618  301044 default_sa.go:45] found service account: "default"
	I0729 13:43:14.582643  301044 default_sa.go:55] duration metric: took 196.19918ms for default service account to be created ...
	I0729 13:43:14.582652  301044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:14.785669  301044 system_pods.go:86] 9 kube-system pods found
	I0729 13:43:14.785701  301044 system_pods.go:89] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.785707  301044 system_pods.go:89] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.785711  301044 system_pods.go:89] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.785719  301044 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.785723  301044 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.785727  301044 system_pods.go:89] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.785731  301044 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.785737  301044 system_pods.go:89] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.785741  301044 system_pods.go:89] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.785750  301044 system_pods.go:126] duration metric: took 203.092668ms to wait for k8s-apps to be running ...
	I0729 13:43:14.785756  301044 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:14.785801  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:14.802927  301044 system_svc.go:56] duration metric: took 17.160927ms WaitForService to wait for kubelet
	I0729 13:43:14.802957  301044 kubeadm.go:582] duration metric: took 4.263780375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:14.802977  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:14.983106  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:14.983135  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:14.983146  301044 node_conditions.go:105] duration metric: took 180.164781ms to run NodePressure ...
	I0729 13:43:14.983159  301044 start.go:241] waiting for startup goroutines ...
	I0729 13:43:14.983165  301044 start.go:246] waiting for cluster config update ...
	I0729 13:43:14.983175  301044 start.go:255] writing updated cluster config ...
	I0729 13:43:14.983443  301044 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:15.038438  301044 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:15.040318  301044 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-972693" cluster and "default" namespace by default
	I0729 13:43:15.690809  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:15.691011  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:25.691962  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:25.692244  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:45.693269  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:45.693473  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696107  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:44:25.696300  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696307  301425 kubeadm.go:310] 
	I0729 13:44:25.696341  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:44:25.696400  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:44:25.696419  301425 kubeadm.go:310] 
	I0729 13:44:25.696463  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:44:25.696510  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:44:25.696653  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:44:25.696674  301425 kubeadm.go:310] 
	I0729 13:44:25.696818  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:44:25.696868  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:44:25.696921  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:44:25.696930  301425 kubeadm.go:310] 
	I0729 13:44:25.697076  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:44:25.697192  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:44:25.697206  301425 kubeadm.go:310] 
	I0729 13:44:25.697349  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:44:25.697459  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:44:25.697568  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:44:25.697669  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:44:25.697680  301425 kubeadm.go:310] 
	I0729 13:44:25.698359  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:44:25.698490  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:44:25.698596  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:44:25.698771  301425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:44:25.698848  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:44:26.160539  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:26.175482  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:44:26.185562  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:44:26.185593  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:44:26.185657  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:44:26.195781  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:44:26.195865  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:44:26.207404  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:44:26.217068  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:44:26.217188  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:44:26.226075  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.234622  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:44:26.234684  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.243756  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:44:26.252630  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:44:26.252695  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:44:26.262846  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:44:26.340215  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:44:26.340318  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:44:26.496049  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:44:26.496199  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:44:26.496327  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:44:26.678135  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:44:26.680089  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:44:26.680173  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:44:26.680257  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:44:26.680378  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:44:26.680470  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:44:26.680570  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:44:26.680653  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:44:26.680751  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:44:26.681022  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:44:26.681519  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:44:26.681876  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:44:26.681994  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:44:26.682083  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:44:26.762680  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:44:26.922517  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:44:26.973731  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:44:27.193064  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:44:27.216477  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:44:27.219036  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:44:27.219293  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:44:27.386424  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:44:27.388194  301425 out.go:204]   - Booting up control plane ...
	I0729 13:44:27.388340  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:44:27.390345  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:44:27.391455  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:44:27.392303  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:44:27.394301  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:45:07.396989  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:45:07.397449  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:07.397719  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:12.397982  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:12.398297  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:22.398751  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:22.399010  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:42.399462  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:42.399675  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398413  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:46:22.398684  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398700  301425 kubeadm.go:310] 
	I0729 13:46:22.398763  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:46:22.398844  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:46:22.398886  301425 kubeadm.go:310] 
	I0729 13:46:22.398948  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:46:22.399002  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:46:22.399132  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:46:22.399145  301425 kubeadm.go:310] 
	I0729 13:46:22.399287  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:46:22.399346  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:46:22.399392  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:46:22.399404  301425 kubeadm.go:310] 
	I0729 13:46:22.399530  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:46:22.399610  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:46:22.399617  301425 kubeadm.go:310] 
	I0729 13:46:22.399735  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:46:22.399844  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:46:22.399943  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:46:22.400021  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:46:22.400035  301425 kubeadm.go:310] 
	I0729 13:46:22.400291  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:46:22.400370  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:46:22.400440  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:46:22.400520  301425 kubeadm.go:394] duration metric: took 7m57.286753846s to StartCluster
	I0729 13:46:22.400612  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:46:22.400692  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:46:22.446188  301425 cri.go:89] found id: ""
	I0729 13:46:22.446216  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.446225  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:46:22.446232  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:46:22.446289  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:46:22.484089  301425 cri.go:89] found id: ""
	I0729 13:46:22.484118  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.484128  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:46:22.484135  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:46:22.484197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:46:22.526817  301425 cri.go:89] found id: ""
	I0729 13:46:22.526846  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.526854  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:46:22.526860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:46:22.526912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:46:22.564787  301425 cri.go:89] found id: ""
	I0729 13:46:22.564834  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.564846  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:46:22.564854  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:46:22.564920  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:46:22.601843  301425 cri.go:89] found id: ""
	I0729 13:46:22.601881  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.601892  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:46:22.601900  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:46:22.601980  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:46:22.637420  301425 cri.go:89] found id: ""
	I0729 13:46:22.637448  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.637455  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:46:22.637462  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:46:22.637519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:46:22.672427  301425 cri.go:89] found id: ""
	I0729 13:46:22.672465  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.672476  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:46:22.672485  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:46:22.672549  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:46:22.708256  301425 cri.go:89] found id: ""
	I0729 13:46:22.708285  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.708294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:46:22.708306  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:46:22.708323  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:46:22.819287  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:46:22.819327  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:46:22.859298  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:46:22.859339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:46:22.914290  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:46:22.914342  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:46:22.936919  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:46:22.936951  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:46:23.035889  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 13:46:23.035939  301425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:46:23.035991  301425 out.go:239] * 
	W0729 13:46:23.036103  301425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.036137  301425 out.go:239] * 
	W0729 13:46:23.037370  301425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:46:23.040573  301425 out.go:177] 
	W0729 13:46:23.042130  301425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.042173  301425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:46:23.042193  301425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:46:23.043539  301425 out.go:177] 
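For convenience, the troubleshooting steps that the kubeadm output and the closing suggestion recommend are collected below as a sketch; they are not part of the recorded run. The systemctl, journalctl, and crictl invocations are quoted from the failure text above, the final minikube invocation applies the suggestion logged at 13:46:23, and the profile name is a placeholder rather than a value taken from this report:

# Check whether the kubelet is running and inspect its recent logs (as advised in the kubeadm output).
systemctl status kubelet
journalctl -xeu kubelet

# List Kubernetes containers through the CRI-O socket to spot a crashed control-plane component,
# then read the logs of any failing container.
crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

# Retry the start with the kubelet cgroup driver pinned to systemd, per the suggestion above.
minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd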
	
	
	==> CRI-O <==
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.906438725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261135906380164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4fd89b4-02a9-4551-add0-17014787d736 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.907778307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33dd89ad-1eb7-42b3-9cac-0eda81352296 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.907863509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33dd89ad-1eb7-42b3-9cac-0eda81352296 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.908198538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33dd89ad-1eb7-42b3-9cac-0eda81352296 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.971084538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3852edc5-6476-4c3f-adaf-8ff30d212932 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.971280768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3852edc5-6476-4c3f-adaf-8ff30d212932 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.973222004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=839b8249-14f5-4edb-a2fb-babc6eeda5dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.974183906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261135974045745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=839b8249-14f5-4edb-a2fb-babc6eeda5dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.975506467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=708475a9-7b3c-4d34-a066-097ef69b833c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.975599230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=708475a9-7b3c-4d34-a066-097ef69b833c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:15 embed-certs-135920 crio[733]: time="2024-07-29 13:52:15.975861793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=708475a9-7b3c-4d34-a066-097ef69b833c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.029568502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fbb2957-661f-49eb-aacb-3fe88eb754e8 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.029667179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fbb2957-661f-49eb-aacb-3fe88eb754e8 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.031915552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5212cdd6-6ba2-40f3-bc49-63b8d9708840 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.032890794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261136032854313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5212cdd6-6ba2-40f3-bc49-63b8d9708840 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.033494262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94f307a1-e165-4f4f-9109-233473242b28 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.033551531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94f307a1-e165-4f4f-9109-233473242b28 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.034009403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94f307a1-e165-4f4f-9109-233473242b28 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.076930359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d31401f-67e8-4982-9192-e761bf380873 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.077029362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d31401f-67e8-4982-9192-e761bf380873 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.078941654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=898925fb-1ebd-4f35-b528-ecd8e8bf428f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.079513269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261136079487228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=898925fb-1ebd-4f35-b528-ecd8e8bf428f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.080977645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32967255-0be7-436a-9158-80fb82296f9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.081065013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32967255-0be7-436a-9158-80fb82296f9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:16 embed-certs-135920 crio[733]: time="2024-07-29 13:52:16.081377156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32967255-0be7-436a-9158-80fb82296f9d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d6fa41d01a3bb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   14fb773745769       busybox
	77e0f82421c5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   803e367761b0f       coredns-7db6d8ff4d-rgh5d
	197f6e7a6144c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   1325c9477fc3d       storage-provisioner
	5b08d92f67be8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   1325c9477fc3d       storage-provisioner
	646e0d1187d7e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   239ae06cab4ad       kube-proxy-sn8bc
	7ed77a408cabd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   51546de0b77e6       etcd-embed-certs-135920
	d0bbe9cda62b6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   0514a8f6f2fad       kube-controller-manager-embed-certs-135920
	ed231f7f456e5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   741f798f6f138       kube-scheduler-embed-certs-135920
	ac9187ea50de2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   c2c437ffdf740       kube-apiserver-embed-certs-135920
	
	
	==> coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58595 - 46810 "HINFO IN 5845440276659678672.3557346812183137599. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009280019s
	
	
	==> describe nodes <==
	Name:               embed-certs-135920
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-135920
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=embed-certs-135920
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-135920
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:52:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:49:30 +0000   Mon, 29 Jul 2024 13:29:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:49:30 +0000   Mon, 29 Jul 2024 13:29:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:49:30 +0000   Mon, 29 Jul 2024 13:29:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:49:30 +0000   Mon, 29 Jul 2024 13:38:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.207
	  Hostname:    embed-certs-135920
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7562c7425a849ffb5070c9c7a0b2768
	  System UUID:                c7562c74-25a8-49ff-b507-0c9c7a0b2768
	  Boot ID:                    f4437f0d-14d4-4e88-8962-a92f1b148565
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-rgh5d                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-135920                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-135920             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-135920    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-sn8bc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-135920             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-nzn76               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-135920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-135920 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-135920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-135920 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-135920 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-135920 event: Registered Node embed-certs-135920 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-135920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-135920 event: Registered Node embed-certs-135920 in Controller
	
	
	==> dmesg <==
	[Jul29 13:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052072] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042303] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.164186] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.618090] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.387412] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.314639] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.063680] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058131] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.186646] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.116279] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.312358] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +4.385553] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +0.061727] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.907188] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +4.595559] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.476029] systemd-fstab-generator[1580]: Ignoring "noauto" option for root device
	[  +3.263612] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.297093] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] <==
	{"level":"info","ts":"2024-07-29T13:38:44.929361Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e55d95d7437bec44","initial-advertise-peer-urls":["https://192.168.72.207:2380"],"listen-peer-urls":["https://192.168.72.207:2380"],"advertise-client-urls":["https://192.168.72.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:38:44.930175Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:38:44.917784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 switched to configuration voters=(16527530959302290500)"}
	{"level":"info","ts":"2024-07-29T13:38:44.936287Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f6357764450262","local-member-id":"e55d95d7437bec44","added-peer-id":"e55d95d7437bec44","added-peer-peer-urls":["https://192.168.72.207:2380"]}
	{"level":"info","ts":"2024-07-29T13:38:44.917328Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.207:2380"}
	{"level":"info","ts":"2024-07-29T13:38:44.939312Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.207:2380"}
	{"level":"info","ts":"2024-07-29T13:38:44.93953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f6357764450262","local-member-id":"e55d95d7437bec44","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:38:44.939633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:38:45.82596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:38:45.826012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:38:45.826049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 received MsgPreVoteResp from e55d95d7437bec44 at term 2"}
	{"level":"info","ts":"2024-07-29T13:38:45.826062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.826068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 received MsgVoteResp from e55d95d7437bec44 at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.826076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.826086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e55d95d7437bec44 elected leader e55d95d7437bec44 at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.828533Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e55d95d7437bec44","local-member-attributes":"{Name:embed-certs-135920 ClientURLs:[https://192.168.72.207:2379]}","request-path":"/0/members/e55d95d7437bec44/attributes","cluster-id":"6f6357764450262","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:38:45.828592Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:38:45.828933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:38:45.828964Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:38:45.829089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:38:45.830938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:38:45.831678Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.207:2379"}
	{"level":"info","ts":"2024-07-29T13:48:45.859562Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":872}
	{"level":"info","ts":"2024-07-29T13:48:45.869775Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":872,"took":"9.35731ms","hash":351501222,"current-db-size-bytes":2592768,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2592768,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-29T13:48:45.869868Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":351501222,"revision":872,"compact-revision":-1}
	
	
	==> kernel <==
	 13:52:16 up 13 min,  0 users,  load average: 0.13, 0.18, 0.13
	Linux embed-certs-135920 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] <==
	I0729 13:46:48.197706       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:48:47.199261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:48:47.199418       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 13:48:48.200562       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:48:48.200611       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:48:48.200620       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:48:48.200657       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:48:48.200703       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:48:48.201885       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:49:48.201237       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:49:48.201624       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:49:48.201661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:49:48.202519       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:49:48.202622       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:49:48.203778       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:51:48.202269       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:51:48.202369       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:51:48.202380       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:51:48.204478       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:51:48.204577       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:51:48.204604       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] <==
	I0729 13:46:30.580017       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:47:00.102891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:47:00.589705       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:47:30.108898       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:47:30.601923       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:48:00.114353       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:48:00.610294       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:48:30.119816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:48:30.618477       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:49:00.126713       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:49:00.628649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:49:30.132398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:49:30.638895       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:50:00.138022       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:50:00.363347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="325.346µs"
	I0729 13:50:00.647557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:50:14.364342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="159.555µs"
	E0729 13:50:30.144252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:50:30.655259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:51:00.149623       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:51:00.663001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:51:30.154985       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:51:30.674033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:52:00.160586       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:52:00.681654       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] <==
	I0729 13:38:47.869659       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:38:47.880213       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.207"]
	I0729 13:38:47.921513       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:38:47.921547       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:38:47.921569       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:38:47.924176       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:38:47.924429       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:38:47.924619       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:38:47.925810       1 config.go:192] "Starting service config controller"
	I0729 13:38:47.925886       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:38:47.925968       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:38:47.925989       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:38:47.926754       1 config.go:319] "Starting node config controller"
	I0729 13:38:47.927772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:38:48.026219       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:38:48.026311       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:38:48.028137       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] <==
	I0729 13:38:45.320257       1 serving.go:380] Generated self-signed cert in-memory
	W0729 13:38:47.145407       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 13:38:47.145497       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:38:47.145508       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 13:38:47.145514       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 13:38:47.200368       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 13:38:47.200462       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:38:47.207810       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 13:38:47.211706       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 13:38:47.211748       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:38:47.211768       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 13:38:47.312472       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:49:48 embed-certs-135920 kubelet[944]: E0729 13:49:48.362816     944 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 13:49:48 embed-certs-135920 kubelet[944]: E0729 13:49:48.362911     944 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 13:49:48 embed-certs-135920 kubelet[944]: E0729 13:49:48.363193     944 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8p2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-nzn76_kube-system(4ce279ad-65aa-47ce-9cb2-9a964d26950c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 13:49:48 embed-certs-135920 kubelet[944]: E0729 13:49:48.363257     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:50:00 embed-certs-135920 kubelet[944]: E0729 13:50:00.349454     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:50:14 embed-certs-135920 kubelet[944]: E0729 13:50:14.349749     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:50:29 embed-certs-135920 kubelet[944]: E0729 13:50:29.348476     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:50:41 embed-certs-135920 kubelet[944]: E0729 13:50:41.348934     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:50:43 embed-certs-135920 kubelet[944]: E0729 13:50:43.364630     944 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:50:43 embed-certs-135920 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:50:43 embed-certs-135920 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:50:43 embed-certs-135920 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:50:43 embed-certs-135920 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:50:55 embed-certs-135920 kubelet[944]: E0729 13:50:55.348864     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:51:06 embed-certs-135920 kubelet[944]: E0729 13:51:06.350282     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:51:21 embed-certs-135920 kubelet[944]: E0729 13:51:21.349551     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:51:32 embed-certs-135920 kubelet[944]: E0729 13:51:32.349430     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:51:43 embed-certs-135920 kubelet[944]: E0729 13:51:43.349746     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:51:43 embed-certs-135920 kubelet[944]: E0729 13:51:43.366287     944 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:51:43 embed-certs-135920 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:51:43 embed-certs-135920 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:51:43 embed-certs-135920 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:51:43 embed-certs-135920 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:51:57 embed-certs-135920 kubelet[944]: E0729 13:51:57.348786     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:52:12 embed-certs-135920 kubelet[944]: E0729 13:52:12.349186     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	
	
	==> storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] <==
	I0729 13:38:48.562587       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:38:48.582550       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:38:48.582824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:39:05.981368       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:39:05.981605       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-135920_95a59b64-6ebe-48ac-9681-e2fb4ef0b1e1!
	I0729 13:39:05.985345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a509313c-4b5c-4823-a2c1-a8b580d2e8ee", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-135920_95a59b64-6ebe-48ac-9681-e2fb4ef0b1e1 became leader
	I0729 13:39:06.082289       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-135920_95a59b64-6ebe-48ac-9681-e2fb4ef0b1e1!
	
	
	==> storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] <==
	I0729 13:38:47.808195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 13:38:47.814844       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-135920 -n embed-certs-135920
E0729 13:52:18.313384  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-135920 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-nzn76
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-135920 describe pod metrics-server-569cc877fc-nzn76
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-135920 describe pod metrics-server-569cc877fc-nzn76: exit status 1 (81.777512ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-nzn76" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-135920 describe pod metrics-server-569cc877fc-nzn76: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.59s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 13:43:46.439297  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:44:27.881317  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 13:45:06.009667  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:45:37.633705  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 13:52:15.626289367 +0000 UTC m=+6613.889644170
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-972693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-972693 logs -n 25: (2.444941894s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo cat                              | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:34:10
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:10.969348  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969356  301425 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:10.969360  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969506  301425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:34:10.970007  301425 out.go:298] Setting JSON to false
	I0729 13:34:10.970908  301425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11794,"bootTime":1722248257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:34:10.970971  301425 start.go:139] virtualization: kvm guest
	I0729 13:34:10.973245  301425 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:34:10.974804  301425 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:34:10.974803  301425 notify.go:220] Checking for updates...
	I0729 13:34:10.977011  301425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:34:10.978270  301425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:34:10.979473  301425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:34:10.980743  301425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:34:10.981923  301425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:34:10.983514  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:34:10.983962  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:10.984049  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:10.998985  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 13:34:10.999407  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:10.999928  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:10.999951  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.000306  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.000497  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.002455  301425 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:34:11.003702  301425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:34:11.003997  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:11.004037  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:11.018707  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0729 13:34:11.019177  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:11.019653  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:11.019676  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.019968  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.020126  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.055819  301425 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:34:11.057085  301425 start.go:297] selected driver: kvm2
	I0729 13:34:11.057104  301425 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.057242  301425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:34:11.057967  301425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.058029  301425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:34:11.073706  301425 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:34:11.074089  301425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:34:11.074169  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:34:11.074188  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:34:11.074240  301425 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.074366  301425 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.076296  301425 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:34:09.149068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:11.077828  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:34:11.077869  301425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:34:11.077879  301425 cache.go:56] Caching tarball of preloaded images
	I0729 13:34:11.077959  301425 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:34:11.077970  301425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:34:11.078069  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:34:11.078241  301425 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:34:15.229067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:18.301058  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:24.381104  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:27.453064  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:33.533067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:36.605120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:42.685075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:45.757111  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:51.837033  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:54.909068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:00.989073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:04.061125  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:10.141082  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:13.213123  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:19.293109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:22.365061  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:28.445075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:31.517094  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:37.597080  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:40.669073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:46.749070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:49.821083  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:55.901013  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:58.973149  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:05.053098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:08.125109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:14.205093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:17.277093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:23.357105  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:26.429122  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:32.509070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:35.581107  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:41.661120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:44.733129  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:50.813085  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:53.885117  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:59.965073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:03.037079  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:09.117098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:12.189049  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:15.193505  300746 start.go:364] duration metric: took 4m36.683808785s to acquireMachinesLock for "no-preload-566777"
	I0729 13:37:15.193569  300746 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:15.193577  300746 fix.go:54] fixHost starting: 
	I0729 13:37:15.193937  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:15.193976  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:15.209623  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0729 13:37:15.210158  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:15.210625  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:37:15.210646  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:15.211001  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:15.211265  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:15.211468  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:37:15.213144  300746 fix.go:112] recreateIfNeeded on no-preload-566777: state=Stopped err=<nil>
	I0729 13:37:15.213185  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	W0729 13:37:15.213349  300746 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:15.215474  300746 out.go:177] * Restarting existing kvm2 VM for "no-preload-566777" ...
	I0729 13:37:15.190804  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:15.190850  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191224  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:37:15.191257  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191494  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:37:15.193354  300705 machine.go:97] duration metric: took 4m37.425774293s to provisionDockerMachine
	I0729 13:37:15.193407  300705 fix.go:56] duration metric: took 4m37.447841932s for fixHost
	I0729 13:37:15.193419  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 4m37.447869212s
	W0729 13:37:15.193447  300705 start.go:714] error starting host: provision: host is not running
	W0729 13:37:15.193569  300705 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 13:37:15.193581  300705 start.go:729] Will try again in 5 seconds ...
	I0729 13:37:15.216957  300746 main.go:141] libmachine: (no-preload-566777) Calling .Start
	I0729 13:37:15.217120  300746 main.go:141] libmachine: (no-preload-566777) Ensuring networks are active...
	I0729 13:37:15.217761  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network default is active
	I0729 13:37:15.218067  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network mk-no-preload-566777 is active
	I0729 13:37:15.218451  300746 main.go:141] libmachine: (no-preload-566777) Getting domain xml...
	I0729 13:37:15.219134  300746 main.go:141] libmachine: (no-preload-566777) Creating domain...
	I0729 13:37:16.412301  300746 main.go:141] libmachine: (no-preload-566777) Waiting to get IP...
	I0729 13:37:16.413162  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.413576  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.413670  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.413557  302040 retry.go:31] will retry after 233.512145ms: waiting for machine to come up
	I0729 13:37:16.649335  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.649921  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.649945  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.649885  302040 retry.go:31] will retry after 328.846738ms: waiting for machine to come up
	I0729 13:37:16.980566  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.980976  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.981022  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.980926  302040 retry.go:31] will retry after 329.69915ms: waiting for machine to come up
	I0729 13:37:17.312547  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.312948  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.312977  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.312906  302040 retry.go:31] will retry after 418.810733ms: waiting for machine to come up
	I0729 13:37:17.733615  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.734042  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.734065  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.734009  302040 retry.go:31] will retry after 694.191211ms: waiting for machine to come up
	I0729 13:37:20.196079  300705 start.go:360] acquireMachinesLock for embed-certs-135920: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:18.429670  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:18.430024  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:18.430055  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:18.429973  302040 retry.go:31] will retry after 857.66396ms: waiting for machine to come up
	I0729 13:37:19.289078  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:19.289491  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:19.289521  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:19.289458  302040 retry.go:31] will retry after 994.340261ms: waiting for machine to come up
	I0729 13:37:20.285875  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:20.286308  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:20.286340  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:20.286263  302040 retry.go:31] will retry after 1.052380852s: waiting for machine to come up
	I0729 13:37:21.340435  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:21.340775  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:21.340821  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:21.340743  302040 retry.go:31] will retry after 1.429700498s: waiting for machine to come up
	I0729 13:37:22.772362  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:22.772754  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:22.772782  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:22.772700  302040 retry.go:31] will retry after 1.702185495s: waiting for machine to come up
	I0729 13:37:24.477636  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:24.478074  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:24.478106  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:24.478003  302040 retry.go:31] will retry after 2.649912402s: waiting for machine to come up
	I0729 13:37:27.129797  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:27.130212  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:27.130243  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:27.130159  302040 retry.go:31] will retry after 3.079887428s: waiting for machine to come up
	I0729 13:37:30.213431  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:30.213918  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:30.213958  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:30.213875  302040 retry.go:31] will retry after 3.08003223s: waiting for machine to come up
	I0729 13:37:33.297139  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.297604  300746 main.go:141] libmachine: (no-preload-566777) Found IP for machine: 192.168.61.84
	I0729 13:37:33.297627  300746 main.go:141] libmachine: (no-preload-566777) Reserving static IP address...
	I0729 13:37:33.297639  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has current primary IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.298106  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.298146  300746 main.go:141] libmachine: (no-preload-566777) Reserved static IP address: 192.168.61.84
	I0729 13:37:33.298164  300746 main.go:141] libmachine: (no-preload-566777) DBG | skip adding static IP to network mk-no-preload-566777 - found existing host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"}
	I0729 13:37:33.298178  300746 main.go:141] libmachine: (no-preload-566777) DBG | Getting to WaitForSSH function...
	I0729 13:37:33.298194  300746 main.go:141] libmachine: (no-preload-566777) Waiting for SSH to be available...
	I0729 13:37:33.300310  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300618  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.300653  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300731  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH client type: external
	I0729 13:37:33.300773  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa (-rw-------)
	I0729 13:37:33.300826  300746 main.go:141] libmachine: (no-preload-566777) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:33.300957  300746 main.go:141] libmachine: (no-preload-566777) DBG | About to run SSH command:
	I0729 13:37:33.300985  300746 main.go:141] libmachine: (no-preload-566777) DBG | exit 0
	I0729 13:37:34.861481  301044 start.go:364] duration metric: took 4m23.064160625s to acquireMachinesLock for "default-k8s-diff-port-972693"
	I0729 13:37:34.861564  301044 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:34.861576  301044 fix.go:54] fixHost starting: 
	I0729 13:37:34.862021  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:34.862055  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:34.879106  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0729 13:37:34.879506  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:34.880050  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:37:34.880077  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:34.880423  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:34.880637  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:34.880838  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:37:34.882251  301044 fix.go:112] recreateIfNeeded on default-k8s-diff-port-972693: state=Stopped err=<nil>
	I0729 13:37:34.882284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	W0729 13:37:34.882465  301044 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:34.884611  301044 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-972693" ...
	I0729 13:37:33.420745  300746 main.go:141] libmachine: (no-preload-566777) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:33.421178  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetConfigRaw
	I0729 13:37:33.421861  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.424343  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.424680  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.424710  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.425061  300746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/config.json ...
	I0729 13:37:33.425244  300746 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:33.425262  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:33.425513  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.427708  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.427961  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.427989  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.428171  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.428354  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428528  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428672  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.428933  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.429139  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.429150  300746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:33.525027  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:33.525065  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525306  300746 buildroot.go:166] provisioning hostname "no-preload-566777"
	I0729 13:37:33.525340  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525551  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.528124  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528491  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.528529  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528677  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.528865  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529025  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529144  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.529286  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.529453  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.529465  300746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-566777 && echo "no-preload-566777" | sudo tee /etc/hostname
	I0729 13:37:33.638867  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-566777
	
	I0729 13:37:33.638902  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.641406  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641730  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.641762  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641908  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.642112  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642285  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642414  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.642555  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.642727  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.642743  300746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-566777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-566777/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-566777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:33.749760  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:33.749789  300746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:33.749812  300746 buildroot.go:174] setting up certificates
	I0729 13:37:33.749821  300746 provision.go:84] configureAuth start
	I0729 13:37:33.749831  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.750114  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.752924  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753241  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.753264  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753477  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.755385  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755681  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.755701  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755840  300746 provision.go:143] copyHostCerts
	I0729 13:37:33.755904  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:33.755926  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:33.756019  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:33.756156  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:33.756169  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:33.756206  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:33.756276  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:33.756286  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:33.756317  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:33.756380  300746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.no-preload-566777 san=[127.0.0.1 192.168.61.84 localhost minikube no-preload-566777]
	I0729 13:37:34.226953  300746 provision.go:177] copyRemoteCerts
	I0729 13:37:34.227033  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:34.227066  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.229542  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229816  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.229853  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.230177  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.230314  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.230452  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.310803  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:37:34.334545  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:37:34.357908  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:34.381163  300746 provision.go:87] duration metric: took 631.325967ms to configureAuth
	I0729 13:37:34.381200  300746 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:34.381441  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:37:34.381535  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.383985  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384286  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.384312  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384473  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.384681  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384862  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384995  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.385176  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.385393  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.385414  300746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:34.640587  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:34.640615  300746 machine.go:97] duration metric: took 1.215357318s to provisionDockerMachine
	I0729 13:37:34.640628  300746 start.go:293] postStartSetup for "no-preload-566777" (driver="kvm2")
	I0729 13:37:34.640645  300746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:34.640683  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.641067  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:34.641104  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.643711  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644066  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.644097  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644215  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.644398  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.644555  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.644677  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.723215  300746 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:34.727393  300746 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:34.727425  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:34.727507  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:34.727614  300746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:34.727770  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:34.736666  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:34.759678  300746 start.go:296] duration metric: took 119.034973ms for postStartSetup
	I0729 13:37:34.759716  300746 fix.go:56] duration metric: took 19.566140877s for fixHost
	I0729 13:37:34.759748  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.762103  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762468  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.762491  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762645  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.762843  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763008  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763111  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.763229  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.763392  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.763403  300746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:34.861306  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260254.835831305
	
	I0729 13:37:34.861333  300746 fix.go:216] guest clock: 1722260254.835831305
	I0729 13:37:34.861341  300746 fix.go:229] Guest: 2024-07-29 13:37:34.835831305 +0000 UTC Remote: 2024-07-29 13:37:34.759720831 +0000 UTC m=+296.387252495 (delta=76.110474ms)
	I0729 13:37:34.861376  300746 fix.go:200] guest clock delta is within tolerance: 76.110474ms
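The fix.go lines above compare the guest's clock (read over SSH) against the host's and accept the 76ms drift because it is within tolerance. A small Go sketch of that comparison follows; the one-minute tolerance is an assumption chosen for the example, not a value taken from minikube's source, and the timestamps are the ones printed in the log.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host clock difference and whether it is acceptable.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1722260254, 835831305) // parsed from the guest's "date" output above
	host := time.Date(2024, 7, 29, 13, 37, 34, 759720831, time.UTC)
	if d, ok := clockDeltaOK(guest, host, time.Minute); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}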
	I0729 13:37:34.861384  300746 start.go:83] releasing machines lock for "no-preload-566777", held for 19.66783585s
	I0729 13:37:34.861413  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.861708  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:34.864181  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864534  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.864567  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864757  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865296  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865467  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865546  300746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:34.865600  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.865726  300746 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:34.865753  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.868333  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868522  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868772  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868810  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868839  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868859  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868913  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869060  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869152  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869209  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869300  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869349  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869417  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.869551  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.970978  300746 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:34.978226  300746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:35.128653  300746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:35.134619  300746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:35.134688  300746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:35.150674  300746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:35.150697  300746 start.go:495] detecting cgroup driver to use...
	I0729 13:37:35.150762  300746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:35.166545  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:35.178859  300746 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:35.178913  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:35.197133  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:35.214430  300746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:35.337707  300746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:35.467057  300746 docker.go:233] disabling docker service ...
	I0729 13:37:35.467134  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:35.480960  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:35.493850  300746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:35.629455  300746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:35.741534  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:35.754886  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:35.773243  300746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:37:35.773323  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.783589  300746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:35.783673  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.794150  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.805389  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.816636  300746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:35.828027  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.838467  300746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.856470  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.866773  300746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:35.876110  300746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:35.876175  300746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:35.889768  300746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:35.909971  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:36.046023  300746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:36.192169  300746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:36.192238  300746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:36.197281  300746 start.go:563] Will wait 60s for crictl version
	I0729 13:37:36.197365  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.201359  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:36.248317  300746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:36.248420  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.276247  300746 ssh_runner.go:195] Run: crio --version
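After restarting cri-o, the log above waits up to 60s for the socket at /var/run/crio/crio.sock to appear and then queries the runtime with crictl. Below is only an illustrative Go sketch of that wait; the polling interval and function names are assumptions, not minikube's code.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists, the runtime is accepting connections
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready; crictl version should now succeed")
}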
	I0729 13:37:36.306549  300746 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 13:37:34.885944  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Start
	I0729 13:37:34.886114  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring networks are active...
	I0729 13:37:34.886856  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network default is active
	I0729 13:37:34.887211  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network mk-default-k8s-diff-port-972693 is active
	I0729 13:37:34.887684  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Getting domain xml...
	I0729 13:37:34.888427  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Creating domain...
	I0729 13:37:36.147265  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting to get IP...
	I0729 13:37:36.148095  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148547  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148616  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.148516  302181 retry.go:31] will retry after 191.117257ms: waiting for machine to come up
	I0729 13:37:36.340984  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341507  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.341444  302181 retry.go:31] will retry after 285.557329ms: waiting for machine to come up
	I0729 13:37:36.629066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629670  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.629621  302181 retry.go:31] will retry after 397.294163ms: waiting for machine to come up
	I0729 13:37:36.307930  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:36.311057  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311389  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:36.311417  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311699  300746 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:36.316257  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:36.330109  300746 kubeadm.go:883] updating cluster {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:36.330268  300746 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:37:36.330320  300746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:36.367218  300746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 13:37:36.367250  300746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:37:36.367327  300746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.367333  300746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.367394  300746 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 13:37:36.367404  300746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.367432  300746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.367353  300746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.367412  300746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.367743  300746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.369020  300746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.369125  300746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.369150  300746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.369203  300746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.369015  300746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.369484  300746 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 13:37:36.369609  300746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.369763  300746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.560256  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.600945  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.604476  300746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 13:37:36.604539  300746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.604592  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.606566  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 13:37:36.649109  300746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 13:37:36.649210  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.649212  300746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.649328  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.696863  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.698623  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.713816  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.727059  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.764110  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.764204  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.764208  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.784479  300746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 13:37:36.784542  300746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.784558  300746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 13:37:36.784597  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.784598  300746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.784694  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.813445  300746 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 13:37:36.813491  300746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.813544  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.825275  300746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 13:37:36.825290  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 13:37:36.825392  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825463  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825327  300746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.825515  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.852786  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.852866  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:36.852822  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.852843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.852984  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:37.587824  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:37.028009  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028349  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028378  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.028295  302181 retry.go:31] will retry after 507.597159ms: waiting for machine to come up
	I0729 13:37:37.538138  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538550  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538581  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.538507  302181 retry.go:31] will retry after 508.855087ms: waiting for machine to come up
	I0729 13:37:38.049628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050241  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050277  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.050198  302181 retry.go:31] will retry after 889.089993ms: waiting for machine to come up
	I0729 13:37:38.940541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941096  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.941009  302181 retry.go:31] will retry after 891.889885ms: waiting for machine to come up
	I0729 13:37:39.834956  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835395  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:39.835341  302181 retry.go:31] will retry after 1.030799215s: waiting for machine to come up
	I0729 13:37:40.867814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868336  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868367  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:40.868283  302181 retry.go:31] will retry after 1.40369357s: waiting for machine to come up
	I0729 13:37:38.870850  300746 ssh_runner.go:235] Completed: which crictl: (2.045307778s)
	I0729 13:37:38.870925  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:38.870921  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.045429354s)
	I0729 13:37:38.870946  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 13:37:38.871001  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0: (2.018116939s)
	I0729 13:37:38.871024  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.01808875s)
	I0729 13:37:38.871054  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871083  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.018080011s)
	I0729 13:37:38.871109  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 13:37:38.871120  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871056  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 13:37:38.871166  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871151  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871234  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0: (2.018278547s)
	I0729 13:37:38.871247  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:38.871259  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 13:37:38.871304  300746 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.283446632s)
	I0729 13:37:38.871343  300746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 13:37:38.871372  300746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:38.871406  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:38.871310  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:38.939395  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:38.939419  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 13:37:38.939532  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:40.939632  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.068434649s)
	I0729 13:37:40.939669  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 13:37:40.939693  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939702  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.068259157s)
	I0729 13:37:40.939734  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 13:37:40.939761  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939794  300746 ssh_runner.go:235] Completed: which crictl: (2.068372626s)
	I0729 13:37:40.939827  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.068564103s)
	I0729 13:37:40.939843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:40.939844  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.000295325s)
	I0729 13:37:40.939847  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 13:37:40.939856  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 13:37:40.999406  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 13:37:40.999505  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:43.015187  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.075399061s)
	I0729 13:37:43.015226  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 13:37:43.015243  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.015694914s)
	I0729 13:37:43.015259  300746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:43.015279  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 13:37:43.015313  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:42.273822  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274326  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:42.274251  302181 retry.go:31] will retry after 2.255017939s: waiting for machine to come up
	I0729 13:37:44.531432  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531845  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531873  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:44.531801  302181 retry.go:31] will retry after 2.272405743s: waiting for machine to come up
	I0729 13:37:46.401061  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.385713069s)
	I0729 13:37:46.401109  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 13:37:46.401147  300746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:46.401207  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:48.358628  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.9573934s)
	I0729 13:37:48.358659  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 13:37:48.358682  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:48.358733  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:46.806043  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806654  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806681  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:46.806599  302181 retry.go:31] will retry after 2.212726673s: waiting for machine to come up
	I0729 13:37:49.022244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022732  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022770  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:49.022677  302181 retry.go:31] will retry after 3.071460325s: waiting for machine to come up
	I0729 13:37:50.216727  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.857925776s)
	I0729 13:37:50.216769  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 13:37:50.216822  300746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.216879  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.862685  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 13:37:50.862738  300746 cache_images.go:123] Successfully loaded all cached images
	I0729 13:37:50.862746  300746 cache_images.go:92] duration metric: took 14.49548231s to LoadCachedImages
	I0729 13:37:50.862763  300746 kubeadm.go:934] updating node { 192.168.61.84 8443 v1.31.0-beta.0 crio true true} ...
	I0729 13:37:50.862924  300746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-566777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:50.863021  300746 ssh_runner.go:195] Run: crio config
	I0729 13:37:50.911526  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:50.911551  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:50.911563  300746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:50.911593  300746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.84 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-566777 NodeName:no-preload-566777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:50.911782  300746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-566777"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:50.911856  300746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 13:37:50.922091  300746 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:50.922162  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:50.931275  300746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 13:37:50.947494  300746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 13:37:50.963108  300746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 13:37:50.979666  300746 ssh_runner.go:195] Run: grep 192.168.61.84	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:50.983215  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:50.994627  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:51.117275  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:51.134412  300746 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777 for IP: 192.168.61.84
	I0729 13:37:51.134439  300746 certs.go:194] generating shared ca certs ...
	I0729 13:37:51.134461  300746 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:51.134641  300746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:51.134692  300746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:51.134703  300746 certs.go:256] generating profile certs ...
	I0729 13:37:51.134825  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/client.key
	I0729 13:37:51.134901  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key.445c667e
	I0729 13:37:51.134962  300746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key
	I0729 13:37:51.135114  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:51.135153  300746 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:51.135166  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:51.135196  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:51.135225  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:51.135256  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:51.135309  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:51.136036  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:51.169507  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:51.201916  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:51.227860  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:51.263617  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:37:51.288105  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:37:51.314837  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:51.343892  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:37:51.367328  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:51.389470  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:51.411446  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:51.433270  300746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:51.448939  300746 ssh_runner.go:195] Run: openssl version
	I0729 13:37:51.454475  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:51.465080  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469541  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469605  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.475366  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:51.485979  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:51.496382  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500511  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500571  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.505997  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:51.516733  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:51.527637  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531754  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531797  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.537237  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:51.548006  300746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:51.552581  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:51.558414  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:51.563879  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:51.569869  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:51.575800  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:37:51.581525  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:37:51.587642  300746 kubeadm.go:392] StartCluster: {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:51.587777  300746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:37:51.587828  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.627118  300746 cri.go:89] found id: ""
	I0729 13:37:51.627212  300746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:37:51.637686  300746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:37:51.637711  300746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:37:51.637765  300746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:37:51.647368  300746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:37:51.648291  300746 kubeconfig.go:125] found "no-preload-566777" server: "https://192.168.61.84:8443"
	I0729 13:37:51.650296  300746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:37:51.659616  300746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.84
	I0729 13:37:51.659649  300746 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:37:51.659663  300746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:37:51.659714  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.700636  300746 cri.go:89] found id: ""
	I0729 13:37:51.700703  300746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:37:51.718225  300746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:37:51.728237  300746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:37:51.728257  300746 kubeadm.go:157] found existing configuration files:
	
	I0729 13:37:51.728303  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:37:51.738280  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:37:51.738364  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:37:51.748770  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:37:51.758572  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:37:51.758649  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:37:51.769634  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.779757  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:37:51.779827  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.790745  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:37:51.801212  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:37:51.801275  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:37:51.811706  300746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:37:51.821251  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:51.933905  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.401823  301425 start.go:364] duration metric: took 3m42.323534375s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:37:53.401902  301425 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:53.401914  301425 fix.go:54] fixHost starting: 
	I0729 13:37:53.402310  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:53.402344  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:53.421973  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 13:37:53.422456  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:53.423079  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:37:53.423112  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:53.423508  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:53.423734  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:37:53.423883  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:37:53.425687  301425 fix.go:112] recreateIfNeeded on old-k8s-version-924039: state=Stopped err=<nil>
	I0729 13:37:53.425733  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	W0729 13:37:53.425902  301425 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:53.427931  301425 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	I0729 13:37:52.097443  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.097870  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Found IP for machine: 192.168.50.34
	I0729 13:37:52.097904  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserving static IP address...
	I0729 13:37:52.097923  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has current primary IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.098329  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.098357  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserved static IP address: 192.168.50.34
	I0729 13:37:52.098377  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | skip adding static IP to network mk-default-k8s-diff-port-972693 - found existing host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"}
	I0729 13:37:52.098406  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for SSH to be available...
	I0729 13:37:52.098423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Getting to WaitForSSH function...
	I0729 13:37:52.100530  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.100878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.100908  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.101029  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH client type: external
	I0729 13:37:52.101062  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa (-rw-------)
	I0729 13:37:52.101106  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:52.101121  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | About to run SSH command:
	I0729 13:37:52.101145  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | exit 0
	I0729 13:37:52.225041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:52.225381  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetConfigRaw
	I0729 13:37:52.226001  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.228722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229109  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.229140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229315  301044 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/config.json ...
	I0729 13:37:52.229522  301044 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:52.229541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:52.229716  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.231823  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.232181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.232446  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232613  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232758  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.232913  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.233100  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.233111  301044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:52.336948  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:52.336978  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337288  301044 buildroot.go:166] provisioning hostname "default-k8s-diff-port-972693"
	I0729 13:37:52.337321  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337552  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.340284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340598  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.340623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340724  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.340913  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341090  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341261  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.341419  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.341591  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.341603  301044 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-972693 && echo "default-k8s-diff-port-972693" | sudo tee /etc/hostname
	I0729 13:37:52.455264  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-972693
	
	I0729 13:37:52.455294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.457937  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458304  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.458332  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458465  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.458667  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458857  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458995  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.459170  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.459352  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.459376  301044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-972693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-972693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-972693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:52.570543  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:52.570578  301044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:52.570603  301044 buildroot.go:174] setting up certificates
	I0729 13:37:52.570617  301044 provision.go:84] configureAuth start
	I0729 13:37:52.570628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.570900  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.573309  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573609  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.573641  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573751  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.575826  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.576177  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576344  301044 provision.go:143] copyHostCerts
	I0729 13:37:52.576414  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:52.576483  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:52.576568  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:52.576698  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:52.576707  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:52.576728  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:52.576786  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:52.576815  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:52.576845  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:52.576902  301044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-972693 san=[127.0.0.1 192.168.50.34 default-k8s-diff-port-972693 localhost minikube]
	I0729 13:37:52.764928  301044 provision.go:177] copyRemoteCerts
	I0729 13:37:52.764988  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:52.765018  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.767540  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.767842  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.767872  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.768041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.768213  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.768362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.768474  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:52.847615  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:52.877666  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 13:37:52.901219  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:52.924922  301044 provision.go:87] duration metric: took 354.279838ms to configureAuth
	I0729 13:37:52.924953  301044 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:52.925157  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:52.925244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.927791  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.928181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.928533  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928830  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.928978  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.929208  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.929230  301044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:53.176359  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:53.176391  301044 machine.go:97] duration metric: took 946.853063ms to provisionDockerMachine
	I0729 13:37:53.176404  301044 start.go:293] postStartSetup for "default-k8s-diff-port-972693" (driver="kvm2")
	I0729 13:37:53.176419  301044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:53.176441  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.176782  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:53.176818  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.179340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.179698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179858  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.180053  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.180214  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.180336  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.259826  301044 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:53.264059  301044 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:53.264087  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:53.264155  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:53.264239  301044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:53.264345  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:53.273954  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:53.297340  301044 start.go:296] duration metric: took 120.913486ms for postStartSetup
	I0729 13:37:53.297392  301044 fix.go:56] duration metric: took 18.435815853s for fixHost
	I0729 13:37:53.297421  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.299859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300187  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.300218  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.300576  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300755  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300932  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.301116  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:53.301314  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:53.301324  301044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:53.401628  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260273.369344581
	
	I0729 13:37:53.401671  301044 fix.go:216] guest clock: 1722260273.369344581
	I0729 13:37:53.401682  301044 fix.go:229] Guest: 2024-07-29 13:37:53.369344581 +0000 UTC Remote: 2024-07-29 13:37:53.297397345 +0000 UTC m=+281.644280810 (delta=71.947236ms)
	I0729 13:37:53.401705  301044 fix.go:200] guest clock delta is within tolerance: 71.947236ms
	I0729 13:37:53.401711  301044 start.go:83] releasing machines lock for "default-k8s-diff-port-972693", held for 18.540175489s
	I0729 13:37:53.401760  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.402061  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:53.404813  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405182  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.405207  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405359  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.405844  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406153  301044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:53.406210  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.406289  301044 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:53.406315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.409060  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409351  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409460  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.409814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.409878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409909  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409992  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410092  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.410183  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.410315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.410435  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410631  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.510289  301044 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:53.517635  301044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:53.660575  301044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:53.668128  301044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:53.668207  301044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:53.690732  301044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:53.690764  301044 start.go:495] detecting cgroup driver to use...
	I0729 13:37:53.690838  301044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:53.707461  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:53.721922  301044 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:53.722004  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:53.740941  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:53.759323  301044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:53.900344  301044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:54.065647  301044 docker.go:233] disabling docker service ...
	I0729 13:37:54.065780  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:54.082468  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:54.098283  301044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:54.213104  301044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:54.339560  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:54.360412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:54.384836  301044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:37:54.384900  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.400889  301044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:54.400980  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.416941  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.433090  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.449306  301044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:54.461742  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.477135  301044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.501431  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.519646  301044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:54.532995  301044 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:54.533074  301044 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:54.550639  301044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:54.561896  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:54.710789  301044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:54.885480  301044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:54.885558  301044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:54.890556  301044 start.go:563] Will wait 60s for crictl version
	I0729 13:37:54.890629  301044 ssh_runner.go:195] Run: which crictl
	I0729 13:37:54.894644  301044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:54.941141  301044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:54.941236  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:54.983380  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:55.027770  301044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:37:53.429298  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .Start
	I0729 13:37:53.429471  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:37:53.430263  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:37:53.430649  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:37:53.431011  301425 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:37:53.431825  301425 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:37:54.749878  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:37:54.751148  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.751716  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.751784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.751696  302377 retry.go:31] will retry after 230.330776ms: waiting for machine to come up
	I0729 13:37:54.984551  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.985138  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.985183  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.985094  302377 retry.go:31] will retry after 291.000555ms: waiting for machine to come up
	I0729 13:37:55.277730  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.278199  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.278220  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.278152  302377 retry.go:31] will retry after 360.474919ms: waiting for machine to come up
	I0729 13:37:55.640675  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.641255  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.641288  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.641207  302377 retry.go:31] will retry after 480.424143ms: waiting for machine to come up
	I0729 13:37:55.029239  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:55.032722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033225  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:55.033257  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033668  301044 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:55.038429  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:55.056198  301044 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:55.056373  301044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:55.056440  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:55.100534  301044 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:37:55.100612  301044 ssh_runner.go:195] Run: which lz4
	I0729 13:37:55.105708  301044 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:37:55.110384  301044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:37:55.110417  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:37:56.630726  301044 crio.go:462] duration metric: took 1.525047583s to copy over tarball
	I0729 13:37:56.630816  301044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:37:53.446825  300746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.51288234s)
	I0729 13:37:53.446866  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.663105  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.740482  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.823641  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:37:53.823753  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.324001  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.824299  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.933931  300746 api_server.go:72] duration metric: took 1.11028623s to wait for apiserver process to appear ...
	I0729 13:37:54.933969  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:37:54.933996  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:54.934563  300746 api_server.go:269] stopped: https://192.168.61.84:8443/healthz: Get "https://192.168.61.84:8443/healthz": dial tcp 192.168.61.84:8443: connect: connection refused
	I0729 13:37:55.434598  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.005676  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.005719  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.005737  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.066371  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.066408  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.434268  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.439205  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.439240  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:58.934796  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.944368  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.944399  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.434576  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.443061  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:59.443098  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.934805  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.943892  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:37:59.955156  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:37:59.955185  300746 api_server.go:131] duration metric: took 5.021207326s to wait for apiserver health ...
	I0729 13:37:59.955197  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.955205  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:00.307264  300746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:37:56.123854  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.124460  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.124487  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.124433  302377 retry.go:31] will retry after 529.614291ms: waiting for machine to come up
	I0729 13:37:56.656136  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.656626  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.656657  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.656599  302377 retry.go:31] will retry after 794.429248ms: waiting for machine to come up
	I0729 13:37:57.452523  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:57.453001  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:57.453033  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:57.452952  302377 retry.go:31] will retry after 1.140583184s: waiting for machine to come up
	I0729 13:37:58.594636  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:58.595067  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:58.595109  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:58.595024  302377 retry.go:31] will retry after 894.563974ms: waiting for machine to come up
	I0729 13:37:59.491447  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:59.492094  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:59.492120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:59.491993  302377 retry.go:31] will retry after 1.145531829s: waiting for machine to come up
	I0729 13:38:00.639387  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:00.639807  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:00.639838  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:00.639754  302377 retry.go:31] will retry after 1.949675091s: waiting for machine to come up
	I0729 13:37:58.983188  301044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.352336314s)
	I0729 13:37:58.983233  301044 crio.go:469] duration metric: took 2.352468802s to extract the tarball
	I0729 13:37:58.983245  301044 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:37:59.022539  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:59.086881  301044 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:37:59.086913  301044 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:37:59.086924  301044 kubeadm.go:934] updating node { 192.168.50.34 8444 v1.30.3 crio true true} ...
	I0729 13:37:59.087062  301044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-972693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:59.087158  301044 ssh_runner.go:195] Run: crio config
	I0729 13:37:59.144128  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.144163  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:59.144182  301044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:59.144209  301044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-972693 NodeName:default-k8s-diff-port-972693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:59.144376  301044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-972693"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:59.144452  301044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:37:59.154648  301044 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:59.154717  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:59.164572  301044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0729 13:37:59.182967  301044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:37:59.202507  301044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0729 13:37:59.221603  301044 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:59.226646  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:59.244199  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:59.390312  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:59.411152  301044 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693 for IP: 192.168.50.34
	I0729 13:37:59.411178  301044 certs.go:194] generating shared ca certs ...
	I0729 13:37:59.411213  301044 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:59.411421  301044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:59.411481  301044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:59.411495  301044 certs.go:256] generating profile certs ...
	I0729 13:37:59.411614  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/client.key
	I0729 13:37:59.411709  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key.0cff1f82
	I0729 13:37:59.411780  301044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key
	I0729 13:37:59.411977  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:59.412036  301044 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:59.412052  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:59.412090  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:59.412124  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:59.412156  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:59.412221  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:59.413262  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:59.450186  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:59.496339  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:59.535462  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:59.569433  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:37:59.602826  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:37:59.639581  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:59.672966  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:37:59.707007  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:59.741894  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:59.771364  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:59.802928  301044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:59.828730  301044 ssh_runner.go:195] Run: openssl version
	I0729 13:37:59.837356  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:59.855071  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861707  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861781  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.870815  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:59.884842  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:59.899473  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904238  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904312  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.910221  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:59.923542  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:59.936729  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943440  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943496  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.951099  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:59.964578  301044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:59.969476  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:59.975715  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:59.981719  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:59.987788  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:59.993753  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:00.000228  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:38:00.007898  301044 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:00.008033  301044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:00.008091  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.054999  301044 cri.go:89] found id: ""
	I0729 13:38:00.055097  301044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:00.069066  301044 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:00.069090  301044 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:00.069148  301044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:00.083486  301044 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:00.084538  301044 kubeconfig.go:125] found "default-k8s-diff-port-972693" server: "https://192.168.50.34:8444"
	I0729 13:38:00.086623  301044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:00.099514  301044 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.34
	I0729 13:38:00.099555  301044 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:00.099570  301044 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:00.099644  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.137643  301044 cri.go:89] found id: ""
	I0729 13:38:00.137726  301044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:00.157036  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:00.168591  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:00.168614  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:00.168664  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:38:00.178379  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:00.178449  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:00.189688  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:38:00.199323  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:00.199388  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:00.209351  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.219100  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:00.219171  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.228754  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:38:00.238453  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:00.238526  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:00.248479  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:00.258717  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.377121  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.413128  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:00.424610  300746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:00.446537  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:01.601214  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:01.601265  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:01.601278  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:01.601296  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:01.601305  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:01.601312  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:38:01.601323  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:01.601332  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:01.601346  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:01.601357  300746 system_pods.go:74] duration metric: took 1.154789909s to wait for pod list to return data ...
	I0729 13:38:01.601370  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:02.057111  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:02.057149  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:02.057182  300746 node_conditions.go:105] duration metric: took 455.806302ms to run NodePressure ...
	I0729 13:38:02.057210  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.420014  300746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426444  300746 kubeadm.go:739] kubelet initialised
	I0729 13:38:02.426467  300746 kubeadm.go:740] duration metric: took 6.420611ms waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426478  300746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:02.431168  300746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.436892  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436916  300746 pod_ready.go:81] duration metric: took 5.728016ms for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.436925  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436932  300746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.443079  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443102  300746 pod_ready.go:81] duration metric: took 6.163444ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.443110  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443115  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.447945  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447964  300746 pod_ready.go:81] duration metric: took 4.843364ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.447973  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447980  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.457004  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457027  300746 pod_ready.go:81] duration metric: took 9.037058ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.457038  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457045  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.825208  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825246  300746 pod_ready.go:81] duration metric: took 368.180356ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.825259  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825268  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.225868  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.225975  300746 pod_ready.go:81] duration metric: took 400.697293ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.225993  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.226003  300746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.627568  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627605  300746 pod_ready.go:81] duration metric: took 401.589314ms for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.627618  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627628  300746 pod_ready.go:38] duration metric: took 1.201138036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:03.627651  300746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:03.646855  300746 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:03.646893  300746 kubeadm.go:597] duration metric: took 12.009173344s to restartPrimaryControlPlane
	I0729 13:38:03.646910  300746 kubeadm.go:394] duration metric: took 12.059279913s to StartCluster
	I0729 13:38:03.646936  300746 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.647029  300746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:03.649213  300746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.649527  300746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:03.649810  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:38:03.649861  300746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:03.649931  300746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-566777"
	I0729 13:38:03.649962  300746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-566777"
	W0729 13:38:03.649974  300746 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:03.650021  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650400  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.650428  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.650493  300746 addons.go:69] Setting default-storageclass=true in profile "no-preload-566777"
	I0729 13:38:03.650533  300746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-566777"
	I0729 13:38:03.650601  300746 addons.go:69] Setting metrics-server=true in profile "no-preload-566777"
	I0729 13:38:03.650631  300746 addons.go:234] Setting addon metrics-server=true in "no-preload-566777"
	W0729 13:38:03.650642  300746 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:03.650675  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650985  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651014  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651029  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651054  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651324  300746 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:03.652887  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:03.670088  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0729 13:38:03.670283  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0729 13:38:03.670694  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.670769  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.671418  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671423  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671437  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671440  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671755  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0729 13:38:03.671900  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.671927  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.672491  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.672515  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.672711  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.673183  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.673207  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.673468  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.673480  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.673857  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.674012  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.677726  300746 addons.go:234] Setting addon default-storageclass=true in "no-preload-566777"
	W0729 13:38:03.677746  300746 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:03.677777  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.678133  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.678151  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.692817  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 13:38:03.693446  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.693919  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.693945  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.694335  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.694504  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.694718  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0729 13:38:03.695225  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.695726  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.695744  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.696028  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.696154  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.696514  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.697635  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.698597  300746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:03.699466  300746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:03.700447  300746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:03.700463  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:03.700481  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.701375  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:03.701390  300746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:03.701404  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.705199  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705225  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705844  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705866  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705893  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705911  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705946  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706143  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706313  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.706471  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.706755  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.708988  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.710193  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0729 13:38:03.710735  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.711282  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.711296  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.711684  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.712271  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.712322  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.712966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.713103  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.756710  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 13:38:03.757254  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.757760  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.757784  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.758125  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.758376  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.760315  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.760577  300746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:03.760594  300746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:03.760612  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.763679  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.764208  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.764277  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.765045  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.765227  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.765386  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.765546  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.883257  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:03.905104  300746 node_ready.go:35] waiting up to 6m0s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:03.985382  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:03.985412  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:04.014094  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:04.014119  300746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:04.016390  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:04.047695  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:04.062249  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:04.062328  300746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:04.095999  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:05.473341  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4569173s)
	I0729 13:38:05.473396  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473409  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.473421  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.425688075s)
	I0729 13:38:05.473547  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473558  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474089  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.474117  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474129  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474133  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474137  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474142  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474158  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474148  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474213  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.475707  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.475738  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.475746  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.476002  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.476095  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.476124  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.490038  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.490081  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.490420  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.490440  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562064  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46596112s)
	I0729 13:38:05.562122  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562136  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.562492  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.562516  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562532  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562541  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.564397  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.564410  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.564448  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.564471  300746 addons.go:475] Verifying addon metrics-server=true in "no-preload-566777"
	I0729 13:38:05.566888  300746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:38:02.590640  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:02.591134  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:02.591162  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:02.591087  302377 retry.go:31] will retry after 1.765945358s: waiting for machine to come up
	I0729 13:38:04.358332  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:04.358934  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:04.358963  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:04.358899  302377 retry.go:31] will retry after 2.923224015s: waiting for machine to come up
	I0729 13:38:01.713425  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.33625836s)
	I0729 13:38:01.713462  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:01.941164  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.017707  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.134991  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:02.135105  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:02.636248  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.135563  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.264470  301044 api_server.go:72] duration metric: took 1.129485078s to wait for apiserver process to appear ...
	I0729 13:38:03.264512  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:03.264545  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.392570  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.392609  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.392626  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.423076  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.423120  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.764837  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.770393  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:06.770428  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.264879  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.269632  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:07.269670  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.764878  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.770291  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:38:07.781660  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:07.781691  301044 api_server.go:131] duration metric: took 4.517171532s to wait for apiserver health ...
	I0729 13:38:07.781700  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:38:07.781707  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:07.784769  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:05.568441  300746 addons.go:510] duration metric: took 1.918571396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:38:05.916109  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:07.284234  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:07.284764  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:07.284819  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:07.284694  302377 retry.go:31] will retry after 2.9786525s: waiting for machine to come up
	I0729 13:38:10.265771  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:10.266128  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:10.266161  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:10.266077  302377 retry.go:31] will retry after 5.044155966s: waiting for machine to come up
	I0729 13:38:07.786038  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:07.824838  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:07.850139  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:07.862900  301044 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:07.862932  301044 system_pods.go:61] "coredns-7db6d8ff4d-zllk5" [3ebb659a-7849-498b-a81c-54f75c8e1536] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:07.862943  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [fc5c7286-5cd4-4eeb-879e-6263f82c4164] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:07.862950  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [a3a13c0b-844d-4a5b-93a0-fb9784b4b095] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:07.862957  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4e6c469d-b2a5-4ec2-95a4-01b6ad7de347] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:07.862964  301044 system_pods.go:61] "kube-proxy-6hxkb" [42b01d8b-9a37-40d0-ac32-09e3e261f953] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:07.862979  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [2373a650-57bb-4dc3-96ab-7f6cd040c148] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:07.862985  301044 system_pods.go:61] "metrics-server-569cc877fc-dlrjb" [360087fa-273d-4ba8-a299-54678724c45e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:07.862990  301044 system_pods.go:61] "storage-provisioner" [3e3fb5ef-6761-4671-a093-8616241cd98f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:07.862996  301044 system_pods.go:74] duration metric: took 12.833023ms to wait for pod list to return data ...
	I0729 13:38:07.863007  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:07.868359  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:07.868385  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:07.868395  301044 node_conditions.go:105] duration metric: took 5.383164ms to run NodePressure ...
	I0729 13:38:07.868412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:08.166890  301044 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175546  301044 kubeadm.go:739] kubelet initialised
	I0729 13:38:08.175570  301044 kubeadm.go:740] duration metric: took 8.646638ms waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175588  301044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.186944  301044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.194446  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194479  301044 pod_ready.go:81] duration metric: took 7.500494ms for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.194487  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194495  301044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.202341  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202366  301044 pod_ready.go:81] duration metric: took 7.863125ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.202380  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202388  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.209017  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209041  301044 pod_ready.go:81] duration metric: took 6.646309ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.209051  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209057  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.256503  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256530  301044 pod_ready.go:81] duration metric: took 47.465005ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.256543  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256552  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652875  301044 pod_ready.go:92] pod "kube-proxy-6hxkb" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:08.652901  301044 pod_ready.go:81] duration metric: took 396.340654ms for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652912  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.658352  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:08.411629  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:08.908602  300746 node_ready.go:49] node "no-preload-566777" has status "Ready":"True"
	I0729 13:38:08.908629  300746 node_ready.go:38] duration metric: took 5.003487604s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:08.908639  300746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.914468  300746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.921796  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
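The repeated pod_ready lines above amount to polling a pod's PodReady condition until it flips to True or a timeout expires. Below is a rough client-go equivalent of that wait; the kubeconfig path, poll interval, and timeout are placeholders, and this is standard client-go usage rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod until it is Ready or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the log
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-5cfdc65f69-kkrqd", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}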
	I0729 13:38:15.313102  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313621  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313650  301425 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:38:15.313665  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:38:15.314120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.314168  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | skip adding static IP to network mk-old-k8s-version-924039 - found existing host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"}
	I0729 13:38:15.314187  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:38:15.314205  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:38:15.314219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:38:15.316468  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316779  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.316827  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316994  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:38:15.317013  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:38:15.317042  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:15.317054  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:38:15.317076  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:38:15.444818  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:15.445203  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:38:15.445858  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.448296  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.448784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.448834  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.449028  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:38:15.449208  301425 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:15.449226  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:15.449469  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.451695  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452017  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.452046  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.452420  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452606  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452770  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.452945  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.453151  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.453165  301425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:15.561558  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:15.561590  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.561859  301425 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:38:15.561887  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.562079  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.564776  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565116  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.565157  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565286  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.565495  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565669  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565805  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.565952  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.566129  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.566140  301425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:38:15.687712  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:38:15.687744  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.690289  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690614  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.690638  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690864  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.691104  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691290  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691463  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.691649  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.691841  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.691869  301425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:15.814102  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:15.814140  301425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:15.814190  301425 buildroot.go:174] setting up certificates
	I0729 13:38:15.814198  301425 provision.go:84] configureAuth start
	I0729 13:38:15.814210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.814521  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.817140  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817548  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.817583  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817728  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.819957  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820307  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.820335  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820476  301425 provision.go:143] copyHostCerts
	I0729 13:38:15.820529  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:15.820539  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:15.820592  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:15.820685  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:15.820693  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:15.820713  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:15.820772  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:15.820779  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:15.820828  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:15.820909  301425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:38:15.895797  301425 provision.go:177] copyRemoteCerts
	I0729 13:38:15.895866  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:15.895898  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.898774  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899173  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.899214  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899444  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.899672  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.899882  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.900048  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.606081  300705 start.go:364] duration metric: took 56.40993179s to acquireMachinesLock for "embed-certs-135920"
	I0729 13:38:16.606131  300705 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:38:16.606139  300705 fix.go:54] fixHost starting: 
	I0729 13:38:16.606611  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:16.606652  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:16.626502  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0729 13:38:16.626989  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:16.627491  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:16.627511  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:16.627897  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:16.628100  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:16.628242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:16.629856  300705 fix.go:112] recreateIfNeeded on embed-certs-135920: state=Stopped err=<nil>
	I0729 13:38:16.629879  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	W0729 13:38:16.630046  300705 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:38:16.632177  300705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-135920" ...
	I0729 13:38:12.659133  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.159457  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.159792  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.159818  301044 pod_ready.go:81] duration metric: took 7.506898395s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.159827  301044 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.633625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Start
	I0729 13:38:16.633803  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring networks are active...
	I0729 13:38:16.634580  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network default is active
	I0729 13:38:16.634947  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network mk-embed-certs-135920 is active
	I0729 13:38:16.635454  300705 main.go:141] libmachine: (embed-certs-135920) Getting domain xml...
	I0729 13:38:16.636201  300705 main.go:141] libmachine: (embed-certs-135920) Creating domain...
	I0729 13:38:15.988091  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:16.019058  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:38:16.047266  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:16.072992  301425 provision.go:87] duration metric: took 258.777499ms to configureAuth
	I0729 13:38:16.073029  301425 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:16.073250  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:38:16.073338  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.075801  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.076219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076350  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.076560  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076750  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076972  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.077169  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.077354  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.077369  301425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:16.357614  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:16.357650  301425 machine.go:97] duration metric: took 908.424232ms to provisionDockerMachine
	I0729 13:38:16.357666  301425 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:38:16.357680  301425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:16.357706  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.358060  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:16.358089  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.360841  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361257  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.361314  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361410  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.361645  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.361821  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.361987  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.448673  301425 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:16.453435  301425 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:16.453461  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:16.453543  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:16.453638  301425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:16.453763  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:16.464185  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:16.490358  301425 start.go:296] duration metric: took 132.675687ms for postStartSetup
	I0729 13:38:16.490422  301425 fix.go:56] duration metric: took 23.088507704s for fixHost
	I0729 13:38:16.490450  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.493249  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493571  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.493612  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493781  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.494046  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494241  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494388  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.494561  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.494759  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.494769  301425 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:16.605903  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260296.583363181
	
	I0729 13:38:16.605930  301425 fix.go:216] guest clock: 1722260296.583363181
	I0729 13:38:16.605940  301425 fix.go:229] Guest: 2024-07-29 13:38:16.583363181 +0000 UTC Remote: 2024-07-29 13:38:16.490427183 +0000 UTC m=+245.556685019 (delta=92.935998ms)
	I0729 13:38:16.605967  301425 fix.go:200] guest clock delta is within tolerance: 92.935998ms
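The clock-skew check above in miniature: compare the guest clock (read via date +%s.%N) with the host-side timestamp and accept the machine if the difference stays under a tolerance. The timestamps are taken from the log; the 1-second tolerance is an assumption, since the log only shows that a ~93ms delta was accepted.

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1722260296, 583363181)                        // guest clock from the log
	host := time.Date(2024, 7, 29, 13, 38, 16, 490427183, time.UTC)  // host-side "Remote" timestamp
	delta := guest.Sub(host)
	const tolerance = time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() < tolerance)
}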
	I0729 13:38:16.605974  301425 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 23.204101255s
	I0729 13:38:16.606006  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.606296  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:16.609324  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609669  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.609701  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609826  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610328  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610516  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610589  301425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:16.610673  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.610758  301425 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:16.610786  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.613356  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613639  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613689  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.613712  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613910  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614092  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.614112  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.614122  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614287  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614307  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614449  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.614496  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614635  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614771  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.719174  301425 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:16.726348  301425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:16.880130  301425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:16.886410  301425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:16.886484  301425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:16.904120  301425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:16.904151  301425 start.go:495] detecting cgroup driver to use...
	I0729 13:38:16.904222  301425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:16.927036  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:16.947380  301425 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:16.947448  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:16.964612  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:16.979266  301425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:17.108950  301425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:17.263118  301425 docker.go:233] disabling docker service ...
	I0729 13:38:17.263192  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:17.282563  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:17.299473  301425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:17.448598  301425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:17.568025  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:17.583700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:17.603159  301425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:38:17.603223  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.615655  301425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:17.615728  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.628639  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.640456  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.652160  301425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:17.663864  301425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:17.675293  301425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:17.675361  301425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:17.690427  301425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:17.702163  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:17.831401  301425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:17.985760  301425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:17.985851  301425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:17.990740  301425 start.go:563] Will wait 60s for crictl version
	I0729 13:38:17.990798  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:17.994741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:18.035793  301425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:18.035889  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.065036  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.097441  301425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:38:13.421995  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.944090  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.933596  300746 pod_ready.go:92] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.933621  300746 pod_ready.go:81] duration metric: took 8.019124005s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.933634  300746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943434  300746 pod_ready.go:92] pod "etcd-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.943465  300746 pod_ready.go:81] duration metric: took 9.816863ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943478  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952623  300746 pod_ready.go:92] pod "kube-apiserver-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.952644  300746 pod_ready.go:81] duration metric: took 9.157998ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952653  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.956989  300746 pod_ready.go:92] pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.957010  300746 pod_ready.go:81] duration metric: took 4.350015ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.957023  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962772  300746 pod_ready.go:92] pod "kube-proxy-ql6wf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.962796  300746 pod_ready.go:81] duration metric: took 5.763769ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962807  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318604  300746 pod_ready.go:92] pod "kube-scheduler-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:17.318632  300746 pod_ready.go:81] duration metric: took 355.816982ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318642  300746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:18.098840  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:18.102182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102629  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:18.102665  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102925  301425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:18.107544  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:18.122039  301425 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:18.122176  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:38:18.122249  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:18.169198  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:18.169279  301425 ssh_runner.go:195] Run: which lz4
	I0729 13:38:18.173861  301425 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:18.178840  301425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:18.178881  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:38:19.887360  301425 crio.go:462] duration metric: took 1.713549828s to copy over tarball
	I0729 13:38:19.887450  301425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:18.167033  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:20.168009  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:17.933984  300705 main.go:141] libmachine: (embed-certs-135920) Waiting to get IP...
	I0729 13:38:17.935033  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:17.935595  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:17.935652  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:17.935560  302586 retry.go:31] will retry after 195.331915ms: waiting for machine to come up
	I0729 13:38:18.133074  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.133566  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.133592  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.133513  302586 retry.go:31] will retry after 348.993714ms: waiting for machine to come up
	I0729 13:38:18.484164  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.484746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.484771  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.484703  302586 retry.go:31] will retry after 372.899167ms: waiting for machine to come up
	I0729 13:38:18.859212  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.859721  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.859746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.859672  302586 retry.go:31] will retry after 415.38859ms: waiting for machine to come up
	I0729 13:38:19.276241  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.276785  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.276816  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.276715  302586 retry.go:31] will retry after 553.262343ms: waiting for machine to come up
	I0729 13:38:19.831475  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.831994  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.832030  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.831949  302586 retry.go:31] will retry after 579.574559ms: waiting for machine to come up
	I0729 13:38:20.412838  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:20.413273  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:20.413302  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:20.413225  302586 retry.go:31] will retry after 908.712618ms: waiting for machine to come up
	I0729 13:38:21.324197  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:21.324824  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:21.324849  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:21.324723  302586 retry.go:31] will retry after 1.4226484s: waiting for machine to come up
	I0729 13:38:19.328753  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:21.330005  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.836067  301425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.948583188s)
	I0729 13:38:22.836104  301425 crio.go:469] duration metric: took 2.948710335s to extract the tarball
	I0729 13:38:22.836114  301425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:22.878370  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:22.921339  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:22.921370  301425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:38:22.921445  301425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.921545  301425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.921547  301425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:38:22.921633  301425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:22.921475  301425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.921479  301425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923052  301425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:38:22.923712  301425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.923723  301425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923733  301425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.923743  301425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.923803  301425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.923923  301425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.923976  301425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.079335  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.095210  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.096664  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.109172  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.111720  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.114386  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.200545  301425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:38:23.200629  301425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.200698  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.203884  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:38:23.261424  301425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:38:23.261500  301425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.261528  301425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:38:23.261561  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.261569  301425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.261610  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.267971  301425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:38:23.268018  301425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.268075  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317322  301425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:38:23.317369  301425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.317387  301425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:38:23.317422  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317441  301425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.317440  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.317489  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317507  301425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:38:23.317530  301425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:38:23.317551  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.317588  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.317553  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317683  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.322770  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.432764  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:38:23.432833  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:38:23.432877  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.442661  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:38:23.442741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:38:23.442785  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:38:23.442825  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:38:23.481401  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:38:23.484727  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:38:24.057020  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:24.203622  301425 cache_images.go:92] duration metric: took 1.282232497s to LoadCachedImages
	W0729 13:38:24.203724  301425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 13:38:24.203742  301425 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:38:24.203883  301425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:24.203996  301425 ssh_runner.go:195] Run: crio config
	I0729 13:38:24.274480  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:38:24.274531  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:24.274547  301425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:24.274582  301425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:38:24.274784  301425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:24.274863  301425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:38:24.285241  301425 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:24.285333  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:24.294677  301425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:38:24.311572  301425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:24.328768  301425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:38:24.346849  301425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:24.351047  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:24.364302  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:24.502947  301425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:24.524583  301425 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:38:24.524610  301425 certs.go:194] generating shared ca certs ...
	I0729 13:38:24.524626  301425 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:24.524831  301425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:24.524889  301425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:24.524908  301425 certs.go:256] generating profile certs ...
	I0729 13:38:24.525030  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:38:24.525090  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:38:24.525143  301425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:38:24.525300  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:24.525345  301425 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:24.525359  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:24.525390  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:24.525416  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:24.525440  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:24.525495  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:24.526416  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:24.593901  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:24.641443  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:24.679927  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:24.740839  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:38:24.779899  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:38:24.814327  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:24.842166  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:38:24.868619  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:24.894053  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:24.921437  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:24.947676  301425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:24.966469  301425 ssh_runner.go:195] Run: openssl version
	I0729 13:38:24.972780  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:24.985676  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990293  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990356  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.996523  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:25.007631  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:25.018369  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022779  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022840  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.028471  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:25.039307  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:25.050190  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054731  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054799  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.060568  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:25.071531  301425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:25.076195  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:25.082194  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:25.088573  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:25.095625  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:25.101900  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:25.107797  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:38:25.113775  301425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:25.113903  301425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:25.113975  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.159804  301425 cri.go:89] found id: ""
	I0729 13:38:25.159887  301425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:25.172248  301425 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:25.172271  301425 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:25.172321  301425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:25.182852  301425 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:25.184249  301425 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:25.186246  301425 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-924039" cluster setting kubeconfig missing "old-k8s-version-924039" context setting]
	I0729 13:38:25.188334  301425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:25.262355  301425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:25.274019  301425 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0729 13:38:25.274063  301425 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:25.274078  301425 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:25.274141  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.311295  301425 cri.go:89] found id: ""
	I0729 13:38:25.311365  301425 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:25.330380  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:25.343607  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:25.343651  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:25.343709  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:25.356979  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:25.357048  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:25.370453  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:25.386234  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:25.386308  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:25.403905  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.413906  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:25.414011  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.431532  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:25.448250  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:25.448325  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:25.459773  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:25.469841  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:25.584845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:22.667857  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:24.668022  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.748882  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:22.749346  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:22.749368  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:22.749292  302586 retry.go:31] will retry after 1.460248931s: waiting for machine to come up
	I0729 13:38:24.212019  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:24.212538  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:24.212567  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:24.212479  302586 retry.go:31] will retry after 1.462429402s: waiting for machine to come up
	I0729 13:38:25.676972  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:25.677407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:25.677429  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:25.677368  302586 retry.go:31] will retry after 2.551129627s: waiting for machine to come up
	I0729 13:38:23.826435  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:25.826981  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.325176  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:26.367294  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.618571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.775377  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.860948  301425 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:26.861038  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.361227  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.362003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.861172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.361165  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.861469  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.361306  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.861442  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.167961  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:29.667405  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.230763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:28.231276  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:28.231299  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:28.231239  302586 retry.go:31] will retry after 2.333059097s: waiting for machine to come up
	I0729 13:38:30.566386  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:30.566786  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:30.566815  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:30.566733  302586 retry.go:31] will retry after 3.717362174s: waiting for machine to come up
	I0729 13:38:30.326143  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:32.825635  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:31.361866  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.361776  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.862004  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.361883  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.862010  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.362013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.861958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.361390  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.861465  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.165082  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.165674  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.165885  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.288242  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288935  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has current primary IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288968  300705 main.go:141] libmachine: (embed-certs-135920) Found IP for machine: 192.168.72.207
	I0729 13:38:34.288987  300705 main.go:141] libmachine: (embed-certs-135920) Reserving static IP address...
	I0729 13:38:34.289557  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.289586  300705 main.go:141] libmachine: (embed-certs-135920) Reserved static IP address: 192.168.72.207
	I0729 13:38:34.289604  300705 main.go:141] libmachine: (embed-certs-135920) DBG | skip adding static IP to network mk-embed-certs-135920 - found existing host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"}
	I0729 13:38:34.289619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Getting to WaitForSSH function...
	I0729 13:38:34.289635  300705 main.go:141] libmachine: (embed-certs-135920) Waiting for SSH to be available...
	I0729 13:38:34.291951  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292308  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.292340  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292589  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH client type: external
	I0729 13:38:34.292619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa (-rw-------)
	I0729 13:38:34.292651  300705 main.go:141] libmachine: (embed-certs-135920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:34.292665  300705 main.go:141] libmachine: (embed-certs-135920) DBG | About to run SSH command:
	I0729 13:38:34.292677  300705 main.go:141] libmachine: (embed-certs-135920) DBG | exit 0
	I0729 13:38:34.417738  300705 main.go:141] libmachine: (embed-certs-135920) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:34.418128  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetConfigRaw
	I0729 13:38:34.418881  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.421524  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.421875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.421911  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.422113  300705 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/config.json ...
	I0729 13:38:34.422306  300705 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:34.422325  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:34.422544  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.424658  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.425073  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425167  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.425365  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425575  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425786  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.425935  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.426155  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.426172  300705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:34.529324  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:34.529354  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529600  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:38:34.529625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.532564  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.532966  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.533001  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.533274  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.533502  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533701  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533906  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.534116  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.534339  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.534353  300705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-135920 && echo "embed-certs-135920" | sudo tee /etc/hostname
	I0729 13:38:34.651175  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-135920
	
	I0729 13:38:34.651203  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.653763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.654085  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654266  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.654460  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654647  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654838  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.655024  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.655230  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.655246  300705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-135920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-135920/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-135920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:34.769548  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:34.769579  300705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:34.769597  300705 buildroot.go:174] setting up certificates
	I0729 13:38:34.769605  300705 provision.go:84] configureAuth start
	I0729 13:38:34.769613  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.769910  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.772513  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.772833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.772859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.773005  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.775133  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775480  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.775506  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775607  300705 provision.go:143] copyHostCerts
	I0729 13:38:34.775671  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:34.775681  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:34.775738  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:34.775828  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:34.775836  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:34.775855  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:34.775909  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:34.775916  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:34.775932  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:34.775981  300705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.embed-certs-135920 san=[127.0.0.1 192.168.72.207 embed-certs-135920 localhost minikube]
	I0729 13:38:34.901161  300705 provision.go:177] copyRemoteCerts
	I0729 13:38:34.901230  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:34.901258  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.903730  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.904060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904245  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.904428  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.904606  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.904726  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:34.986647  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:35.010406  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:38:35.033884  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:35.057289  300705 provision.go:87] duration metric: took 287.670762ms to configureAuth
	I0729 13:38:35.057318  300705 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:35.057521  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:35.057621  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.060303  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060634  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.060667  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060840  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.061053  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061259  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061433  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.061599  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.061775  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.061792  300705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:35.344890  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:35.344923  300705 machine.go:97] duration metric: took 922.603779ms to provisionDockerMachine
	I0729 13:38:35.344936  300705 start.go:293] postStartSetup for "embed-certs-135920" (driver="kvm2")
	I0729 13:38:35.344947  300705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:35.344964  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.345304  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:35.345341  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.348029  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348420  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.348458  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348612  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.348832  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.348981  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.349112  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.431975  300705 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:35.436416  300705 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:35.436441  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:35.436522  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:35.436621  300705 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:35.436767  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:35.446166  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:35.473466  300705 start.go:296] duration metric: took 128.511199ms for postStartSetup
	I0729 13:38:35.473513  300705 fix.go:56] duration metric: took 18.867373858s for fixHost
	I0729 13:38:35.473540  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.476118  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476477  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.476504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476672  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.476877  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477093  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477241  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.477468  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.477642  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.477652  300705 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:38:35.577853  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260315.546644144
	
	I0729 13:38:35.577882  300705 fix.go:216] guest clock: 1722260315.546644144
	I0729 13:38:35.577892  300705 fix.go:229] Guest: 2024-07-29 13:38:35.546644144 +0000 UTC Remote: 2024-07-29 13:38:35.473518121 +0000 UTC m=+357.868969453 (delta=73.126023ms)
	I0729 13:38:35.577919  300705 fix.go:200] guest clock delta is within tolerance: 73.126023ms
	I0729 13:38:35.577926  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 18.971820448s
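The fixHost step above reads the guest clock over SSH (`date +%s.%N`), compares it with the host clock, and only resynchronizes when the difference exceeds a tolerance; here a ~73ms delta is accepted. A minimal Go sketch of that comparison, with a hypothetical 2-second tolerance (the log does not state the real threshold):

package main

import (
	"fmt"
	"time"
)

// maxClockDelta is a hypothetical tolerance; the log above only shows that a
// ~73ms delta counts as "within tolerance".
const maxClockDelta = 2 * time.Second

// clockDeltaOK reports whether the guest clock is close enough to the host
// clock that no resync is needed.
func clockDeltaOK(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= maxClockDelta
}

func main() {
	host := time.Now()
	guest := host.Add(73 * time.Millisecond) // value taken from the log line above
	d, ok := clockDeltaOK(guest, host)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}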
	I0729 13:38:35.577950  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.578260  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:35.581109  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581474  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.581507  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581707  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582287  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582451  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582562  300705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:35.582616  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.582645  300705 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:35.582673  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.585527  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585555  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585989  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586021  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586062  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586084  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586171  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586351  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586360  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586573  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586582  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586795  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586838  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.586942  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.686359  300705 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:35.692726  300705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:35.838487  300705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:35.844313  300705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:35.844416  300705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:35.861079  300705 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:35.861103  300705 start.go:495] detecting cgroup driver to use...
	I0729 13:38:35.861178  300705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:35.880678  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:35.897996  300705 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:35.898070  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:35.915337  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:35.930990  300705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:36.039923  300705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:36.198255  300705 docker.go:233] disabling docker service ...
	I0729 13:38:36.198340  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:36.213373  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:36.227364  300705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:36.351279  300705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:36.468325  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:36.483692  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:36.503872  300705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:38:36.503945  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.515397  300705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:36.515502  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.527170  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.538668  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.550013  300705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:36.561402  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.573747  300705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.594158  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.606047  300705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:36.616858  300705 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:36.616961  300705 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:36.633281  300705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
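The three commands above show a fallback: the net.bridge.bridge-nf-call-iptables sysctl key does not exist until the br_netfilter module is loaded, so the failed sysctl is tolerated, the module is loaded with modprobe, and IPv4 forwarding is switched on. A rough Go sketch of the same sequence, assuming a Linux guest where sudo, sysctl and modprobe are available (not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback visible in the log: if the
// bridge-nf-call-iptables sysctl is missing, load br_netfilter, then enable
// IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key is absent until the module is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}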
	I0729 13:38:36.644423  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:36.779934  300705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:36.924394  300705 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:36.924483  300705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:36.929889  300705 start.go:563] Will wait 60s for crictl version
	I0729 13:38:36.929935  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:38:36.933671  300705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:36.973428  300705 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:36.973506  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.002245  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.034982  300705 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:38:37.036162  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:37.039092  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:37.039533  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039697  300705 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:37.044028  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:37.057278  300705 kubeadm.go:883] updating cluster {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:37.057398  300705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:38:37.057504  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:37.096111  300705 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:38:37.096205  300705 ssh_runner.go:195] Run: which lz4
	I0729 13:38:37.100600  300705 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:38:37.104942  300705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:37.104974  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:38:35.325849  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:37.326770  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.362042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.862022  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.361208  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.862020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.362115  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.861360  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.362077  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.861478  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.361278  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.861920  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.167072  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:40.667067  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:38.548671  300705 crio.go:462] duration metric: took 1.448103052s to copy over tarball
	I0729 13:38:38.548764  300705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:40.801144  300705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.252337742s)
	I0729 13:38:40.801177  300705 crio.go:469] duration metric: took 2.252468783s to extract the tarball
	I0729 13:38:40.801185  300705 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:40.840132  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:40.887424  300705 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:38:40.887447  300705 cache_images.go:84] Images are preloaded, skipping loading
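Whether the preload tarball has to be copied and extracted is decided by listing the runtime's images (sudo crictl images --output json) and looking for a marker image such as registry.k8s.io/kube-apiserver:v1.30.3, as the two crictl runs above show. A self-contained Go sketch of that check; the JSON field names follow the CRI ListImages response and should be treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the relevant part of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already has the given image, the same
// question answered before deciding to copy the preload tarball.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
	fmt.Println(ok, err)
}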
	I0729 13:38:40.887456  300705 kubeadm.go:934] updating node { 192.168.72.207 8443 v1.30.3 crio true true} ...
	I0729 13:38:40.887583  300705 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-135920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
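The kubelet drop-in printed above is rendered from a template with the node name, node IP, Kubernetes version and container runtime filled in. A simplified Go sketch of such a rendering, using hypothetical field names; the real minikube template carries more options:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a trimmed-down stand-in for the drop-in shown in the log.
const kubeletUnit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log lines above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":     "crio",
		"KubeVersion": "v1.30.3",
		"NodeName":    "embed-certs-135920",
		"NodeIP":      "192.168.72.207",
	})
}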
	I0729 13:38:40.887661  300705 ssh_runner.go:195] Run: crio config
	I0729 13:38:40.943732  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:40.943759  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:40.943771  300705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:40.943801  300705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.207 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-135920 NodeName:embed-certs-135920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:38:40.943967  300705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-135920"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:40.944048  300705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:38:40.954284  300705 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:40.954354  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:40.963877  300705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 13:38:40.981828  300705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:40.999273  300705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 13:38:41.016590  300705 ssh_runner.go:195] Run: grep 192.168.72.207	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:41.020149  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
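Both /etc/hosts updates in this run (host.minikube.internal earlier and control-plane.minikube.internal here) follow the same pattern: drop any stale line for the name, then append the desired mapping. A Go equivalent of that shell one-liner, pointed at a scratch file because writing /etc/hosts needs root; this is a sketch, not the command minikube actually runs:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing mapping for host and appends the
// desired ip/host pair, mirroring the grep/echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/tmp/hosts", "192.168.72.207", "control-plane.minikube.internal"))
}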
	I0729 13:38:41.031970  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:41.163779  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:41.181723  300705 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920 for IP: 192.168.72.207
	I0729 13:38:41.181746  300705 certs.go:194] generating shared ca certs ...
	I0729 13:38:41.181764  300705 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:41.181989  300705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:41.182053  300705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:41.182067  300705 certs.go:256] generating profile certs ...
	I0729 13:38:41.182191  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/client.key
	I0729 13:38:41.182257  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key.45ab1b35
	I0729 13:38:41.182306  300705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key
	I0729 13:38:41.182454  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:41.182501  300705 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:41.182517  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:41.182553  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:41.182583  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:41.182607  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:41.182647  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:41.183522  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:41.239170  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:41.278086  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:41.318584  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:41.351639  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 13:38:41.389242  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:38:41.414897  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:41.439178  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:38:41.464278  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:41.488391  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:41.515271  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:41.539904  300705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:41.557036  300705 ssh_runner.go:195] Run: openssl version
	I0729 13:38:41.562935  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:41.580782  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585603  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585670  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.591504  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:41.602129  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:41.612441  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616813  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616866  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.622328  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:41.633108  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:41.643897  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648369  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648415  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.654085  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:41.665037  300705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:41.670067  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:41.676340  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:41.682386  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:41.688809  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:41.694957  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:41.700469  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
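Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same question expressed in Go with crypto/x509; a sketch, with the path being just one of the files checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the check `openssl x509 -checkend` performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}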
	I0729 13:38:41.706471  300705 kubeadm.go:392] StartCluster: {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:41.706561  300705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:41.706617  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.746623  300705 cri.go:89] found id: ""
	I0729 13:38:41.746703  300705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:41.757101  300705 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:41.757121  300705 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:41.757174  300705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:41.766817  300705 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:41.767837  300705 kubeconfig.go:125] found "embed-certs-135920" server: "https://192.168.72.207:8443"
	I0729 13:38:41.770191  300705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:41.779930  300705 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.207
	I0729 13:38:41.779961  300705 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:41.779976  300705 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:41.780030  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.816273  300705 cri.go:89] found id: ""
	I0729 13:38:41.816350  300705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:41.836512  300705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:41.847230  300705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:41.847249  300705 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:41.847297  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:41.856215  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:41.856262  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:41.866646  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:41.876656  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:41.876723  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:41.886810  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.895693  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:41.895755  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.904774  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:41.915232  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:41.915301  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:41.924961  300705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:41.937051  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:42.059359  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:39.329415  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.826891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.361613  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.861155  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.361524  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.862047  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.361778  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.862055  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.861737  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.361194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.862019  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.326814  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:45.666203  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:42.934386  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.142119  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.221754  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
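Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane and etcd, each against the rendered /var/tmp/minikube/kubeadm.yaml. A compact Go sketch of driving those phases in order; binary and config paths are taken from the log, error handling and PATH setup are simplified:

package main

import (
	"fmt"
	"os/exec"
)

// restartPhases lists the kubeadm init phases run above, in order, when
// restarting an existing control plane.
var restartPhases = [][]string{
	{"init", "phase", "certs", "all"},
	{"init", "phase", "kubeconfig", "all"},
	{"init", "phase", "kubelet-start"},
	{"init", "phase", "control-plane", "all"},
	{"init", "phase", "etcd", "local"},
}

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.30.3/kubeadm"
	const config = "/var/tmp/minikube/kubeadm.yaml"
	for _, phase := range restartPhases {
		args := append(append([]string{kubeadm}, phase...), "--config", config)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", phase, err, out)
			return
		}
	}
}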
	I0729 13:38:43.346345  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:43.346451  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.847275  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.347551  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.391680  300705 api_server.go:72] duration metric: took 1.045336573s to wait for apiserver process to appear ...
	I0729 13:38:44.391709  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:44.391735  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:44.392354  300705 api_server.go:269] stopped: https://192.168.72.207:8443/healthz: Get "https://192.168.72.207:8443/healthz": dial tcp 192.168.72.207:8443: connect: connection refused
	I0729 13:38:44.892773  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.149059  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.149101  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.149128  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.161645  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.161672  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.391878  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.396499  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.396527  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:47.892015  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.897406  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.897436  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:48.391867  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:48.395941  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:38:48.401926  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:48.401951  300705 api_server.go:131] duration metric: took 4.010234721s to wait for apiserver health ...
	I0729 13:38:48.401962  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:48.401970  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:48.403912  300705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:44.073092  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:46.327011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:48.405332  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:48.416550  300705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:48.439881  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:48.452435  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:48.452477  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:48.452527  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:48.452544  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:48.452556  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:48.452575  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:48.452584  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:48.452594  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:48.452604  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:48.452617  300705 system_pods.go:74] duration metric: took 12.710662ms to wait for pod list to return data ...
	I0729 13:38:48.452629  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:48.455453  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:48.455484  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:48.455497  300705 node_conditions.go:105] duration metric: took 2.858433ms to run NodePressure ...
	I0729 13:38:48.455518  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:48.791507  300705 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796191  300705 kubeadm.go:739] kubelet initialised
	I0729 13:38:48.796213  300705 kubeadm.go:740] duration metric: took 4.674843ms waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796222  300705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:48.802395  300705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.807224  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807247  300705 pod_ready.go:81] duration metric: took 4.825485ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.807263  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807269  300705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.812485  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812516  300705 pod_ready.go:81] duration metric: took 5.235923ms for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.812529  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812536  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.817345  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817374  300705 pod_ready.go:81] duration metric: took 4.827847ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.817383  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817390  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.843709  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843754  300705 pod_ready.go:81] duration metric: took 26.35618ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.843775  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843783  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.243226  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243257  300705 pod_ready.go:81] duration metric: took 399.464753ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.243269  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243278  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.643370  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643399  300705 pod_ready.go:81] duration metric: took 400.112533ms for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.643410  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643416  300705 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:50.044089  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044119  300705 pod_ready.go:81] duration metric: took 400.694081ms for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:50.044128  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044135  300705 pod_ready.go:38] duration metric: took 1.247904039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:50.044153  300705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:50.055730  300705 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:50.055755  300705 kubeadm.go:597] duration metric: took 8.298625813s to restartPrimaryControlPlane
	I0729 13:38:50.055765  300705 kubeadm.go:394] duration metric: took 8.349303256s to StartCluster
	I0729 13:38:50.055785  300705 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.055869  300705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:50.057734  300705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.058013  300705 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:50.058092  300705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:50.058165  300705 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-135920"
	I0729 13:38:50.058216  300705 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-135920"
	W0729 13:38:50.058230  300705 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:50.058217  300705 addons.go:69] Setting default-storageclass=true in profile "embed-certs-135920"
	I0729 13:38:50.058244  300705 addons.go:69] Setting metrics-server=true in profile "embed-certs-135920"
	I0729 13:38:50.058268  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058270  300705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-135920"
	I0729 13:38:50.058297  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:50.058305  300705 addons.go:234] Setting addon metrics-server=true in "embed-certs-135920"
	W0729 13:38:50.058350  300705 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:50.058416  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058719  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058746  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058763  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058766  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058732  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058835  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.061029  300705 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:50.062610  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:50.074642  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0729 13:38:50.074661  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0729 13:38:50.075119  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0729 13:38:50.075217  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075310  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075570  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075833  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.075856  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076049  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076066  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076273  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076367  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076393  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076434  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076620  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.076863  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.076912  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.076959  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.077488  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.077519  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.080392  300705 addons.go:234] Setting addon default-storageclass=true in "embed-certs-135920"
	W0729 13:38:50.080419  300705 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:50.080458  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.080872  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.080914  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.093352  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0729 13:38:50.093981  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.094704  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.094742  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.095201  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.095452  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.095863  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0729 13:38:50.096287  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096506  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0729 13:38:50.096945  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096974  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.096991  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.097343  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.097408  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.097508  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.097529  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.099585  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.099600  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.099936  300705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:50.100730  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.100765  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.101377  300705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.101399  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:50.101424  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.101563  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.103218  300705 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:46.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.862046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.362045  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.361183  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.862026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.361204  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.861490  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.361635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.861519  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.104927  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:50.104948  300705 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:50.104971  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.105309  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106036  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.106207  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106369  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.106615  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.106716  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.106817  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.108316  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.108859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108908  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.109081  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.109240  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.109354  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.119251  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0729 13:38:50.119703  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.120206  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.120235  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.120620  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.120813  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.122685  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.122898  300705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.122910  300705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:50.122923  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.125412  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.125875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.125914  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.126140  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.126321  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.126448  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.126566  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.254664  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:50.276352  300705 node_ready.go:35] waiting up to 6m0s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:50.328315  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.412968  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.459653  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:50.459697  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:50.513203  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:50.513237  300705 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:50.576439  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.576469  300705 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:50.611994  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.701214  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701569  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.701636  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701647  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701657  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701663  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701909  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701936  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701939  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.707113  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.707130  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.707390  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.707407  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.707407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.625719  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212712139s)
	I0729 13:38:51.625766  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.625778  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626066  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.626109  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626117  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.626135  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.626143  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626412  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626430  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662030  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.049982518s)
	I0729 13:38:51.662094  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662110  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.662391  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.662759  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.662781  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662798  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.663076  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.663117  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.663126  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.663138  300705 addons.go:475] Verifying addon metrics-server=true in "embed-certs-135920"
	I0729 13:38:51.666005  300705 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 13:38:47.666568  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.167349  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.667365  300705 addons.go:510] duration metric: took 1.609276005s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 13:38:52.280219  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.826113  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.826826  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:53.327720  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.861510  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.362026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.861182  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.361850  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.861931  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.362035  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.861192  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.361173  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.862018  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.665875  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.666184  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.779805  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:56.780550  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:55.826349  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:58.326186  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:56.361740  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.862033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.362084  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.861406  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.861194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.361788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.861962  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.362043  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.862000  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.166551  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:59.167246  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.666773  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:57.780677  300705 node_ready.go:49] node "embed-certs-135920" has status "Ready":"True"
	I0729 13:38:57.780700  300705 node_ready.go:38] duration metric: took 7.504317897s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:57.780709  300705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:57.786299  300705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791107  300705 pod_ready.go:92] pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:57.791132  300705 pod_ready.go:81] duration metric: took 4.805712ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791143  300705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:59.806437  300705 pod_ready.go:102] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:00.296725  300705 pod_ready.go:92] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.296772  300705 pod_ready.go:81] duration metric: took 2.505622037s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.296782  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302450  300705 pod_ready.go:92] pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.302471  300705 pod_ready.go:81] duration metric: took 5.680644ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302482  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306734  300705 pod_ready.go:92] pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.306753  300705 pod_ready.go:81] duration metric: took 4.264085ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306762  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311745  300705 pod_ready.go:92] pod "kube-proxy-sn8bc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.311763  300705 pod_ready.go:81] duration metric: took 4.990061ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311773  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817465  300705 pod_ready.go:92] pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:01.817489  300705 pod_ready.go:81] duration metric: took 1.50570948s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817499  300705 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.825911  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.325485  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.362213  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.861107  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.361767  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.861151  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.361607  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.862013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.362032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.861858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.361611  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.862037  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.667047  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.166825  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.826817  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.326374  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:05.325891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:07.326167  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.362002  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.861635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.361659  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.862061  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.862083  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.361356  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.861763  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.361420  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.861822  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.666165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:10.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:08.824692  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.324207  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:09.326609  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.826082  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.362046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.861909  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.861834  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.361461  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.861666  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.861830  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.361141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.862003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.167800  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.665790  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:13.325286  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.826111  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:14.327217  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.826625  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.361731  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.862014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.361702  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.862141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.361808  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.361104  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.861123  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.361276  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.861176  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.666780  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.165629  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:18.328096  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.824426  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:19.326628  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.825705  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.362052  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.861150  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.361802  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.861996  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.362106  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.861135  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.361998  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.862048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.361848  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.861813  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.666434  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.666549  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:22.824988  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.825210  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.825579  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:23.826380  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:25.826544  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:27.826988  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:26.861651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:26.861733  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:26.904275  301425 cri.go:89] found id: ""
	I0729 13:39:26.904307  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.904315  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:26.904322  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:26.904387  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:26.946925  301425 cri.go:89] found id: ""
	I0729 13:39:26.946954  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.946966  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:26.946973  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:26.947036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:26.979236  301425 cri.go:89] found id: ""
	I0729 13:39:26.979267  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.979276  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:26.979282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:26.979330  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:27.022185  301425 cri.go:89] found id: ""
	I0729 13:39:27.022212  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.022220  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:27.022226  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:27.022277  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:27.055228  301425 cri.go:89] found id: ""
	I0729 13:39:27.055256  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.055266  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:27.055274  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:27.055335  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:27.088885  301425 cri.go:89] found id: ""
	I0729 13:39:27.088918  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.088926  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:27.088933  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:27.088986  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:27.123861  301425 cri.go:89] found id: ""
	I0729 13:39:27.123893  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.123902  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:27.123915  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:27.123967  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:27.157921  301425 cri.go:89] found id: ""
	I0729 13:39:27.157956  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.157964  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:27.157988  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:27.158003  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.222447  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:27.222489  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:27.265646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:27.265680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:27.317344  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:27.317388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:27.333664  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:27.333689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:27.460502  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:29.960703  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:29.974159  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:29.974235  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:30.009701  301425 cri.go:89] found id: ""
	I0729 13:39:30.009740  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.009753  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:30.009761  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:30.009822  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:30.045806  301425 cri.go:89] found id: ""
	I0729 13:39:30.045841  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.045853  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:30.045860  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:30.045924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:30.078709  301425 cri.go:89] found id: ""
	I0729 13:39:30.078738  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.078747  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:30.078753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:30.078808  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:30.112884  301425 cri.go:89] found id: ""
	I0729 13:39:30.112920  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.112932  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:30.112943  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:30.113012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:30.148160  301425 cri.go:89] found id: ""
	I0729 13:39:30.148196  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.148208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:30.148217  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:30.148285  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:30.186939  301425 cri.go:89] found id: ""
	I0729 13:39:30.186967  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.186975  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:30.186981  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:30.187039  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:30.241888  301425 cri.go:89] found id: ""
	I0729 13:39:30.241915  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.241926  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:30.241934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:30.242009  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:30.281482  301425 cri.go:89] found id: ""
	I0729 13:39:30.281510  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.281518  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:30.281527  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:30.281540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:30.321688  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:30.321730  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:30.378464  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:30.378508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:30.394109  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:30.394150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:30.474077  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:30.474101  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:30.474118  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.166322  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.166623  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.666142  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.323534  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.324750  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:30.327219  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:32.826011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.046016  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:33.059705  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:33.059795  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:33.096521  301425 cri.go:89] found id: ""
	I0729 13:39:33.096549  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.096557  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:33.096564  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:33.096621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:33.131262  301425 cri.go:89] found id: ""
	I0729 13:39:33.131295  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.131307  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:33.131314  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:33.131378  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:33.168889  301425 cri.go:89] found id: ""
	I0729 13:39:33.168915  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.168925  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:33.168932  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:33.168994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:33.205513  301425 cri.go:89] found id: ""
	I0729 13:39:33.205547  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.205558  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:33.205567  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:33.205644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:33.247051  301425 cri.go:89] found id: ""
	I0729 13:39:33.247079  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.247087  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:33.247093  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:33.247149  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:33.279541  301425 cri.go:89] found id: ""
	I0729 13:39:33.279575  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.279587  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:33.279596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:33.279659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:33.314000  301425 cri.go:89] found id: ""
	I0729 13:39:33.314034  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.314046  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:33.314054  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:33.314117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:33.351363  301425 cri.go:89] found id: ""
	I0729 13:39:33.351390  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.351401  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:33.351412  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:33.351437  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:33.413509  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:33.413547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:33.428128  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:33.428165  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:33.495430  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:33.495461  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:33.495478  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:33.574060  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:33.574098  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:34.166133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.167919  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.823668  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.824684  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.326216  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826516  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.113561  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:36.126899  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:36.126965  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:36.163363  301425 cri.go:89] found id: ""
	I0729 13:39:36.163396  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.163407  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:36.163414  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:36.163473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:36.205215  301425 cri.go:89] found id: ""
	I0729 13:39:36.205243  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.205259  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:36.205267  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:36.205331  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:36.243166  301425 cri.go:89] found id: ""
	I0729 13:39:36.243220  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.243231  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:36.243239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:36.243295  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:36.280804  301425 cri.go:89] found id: ""
	I0729 13:39:36.280836  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.280845  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:36.280852  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:36.280903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:36.317291  301425 cri.go:89] found id: ""
	I0729 13:39:36.317320  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.317330  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:36.317337  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:36.317399  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:36.358111  301425 cri.go:89] found id: ""
	I0729 13:39:36.358145  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.358156  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:36.358164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:36.358229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:36.399407  301425 cri.go:89] found id: ""
	I0729 13:39:36.399440  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.399451  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:36.399459  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:36.399525  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:36.437876  301425 cri.go:89] found id: ""
	I0729 13:39:36.437904  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.437914  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:36.437926  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:36.437942  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:36.514464  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:36.514493  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:36.514511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:36.592036  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:36.592083  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.647650  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:36.647691  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:36.706890  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:36.706935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.226070  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:39.239313  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:39.239373  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:39.274158  301425 cri.go:89] found id: ""
	I0729 13:39:39.274191  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.274202  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:39.274210  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:39.274286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:39.308448  301425 cri.go:89] found id: ""
	I0729 13:39:39.308484  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.308492  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:39.308499  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:39.308563  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:39.347745  301425 cri.go:89] found id: ""
	I0729 13:39:39.347782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.347791  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:39.347798  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:39.347856  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:39.380649  301425 cri.go:89] found id: ""
	I0729 13:39:39.380679  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.380688  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:39.380696  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:39.380767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:39.415076  301425 cri.go:89] found id: ""
	I0729 13:39:39.415107  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.415115  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:39.415120  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:39.415170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:39.450749  301425 cri.go:89] found id: ""
	I0729 13:39:39.450782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.450793  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:39.450801  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:39.450864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:39.482148  301425 cri.go:89] found id: ""
	I0729 13:39:39.482175  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.482184  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:39.482190  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:39.482239  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:39.518558  301425 cri.go:89] found id: ""
	I0729 13:39:39.518588  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.518597  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:39.518608  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:39.518622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:39.555753  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:39.555786  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:39.606627  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:39.606661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.620359  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:39.620388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:39.690685  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:39.690711  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:39.690728  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:38.665446  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.666445  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826801  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.325166  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:39.827390  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.326038  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.271925  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:42.284365  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:42.284447  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:42.318966  301425 cri.go:89] found id: ""
	I0729 13:39:42.318998  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.319020  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:42.319028  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:42.319111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:42.354811  301425 cri.go:89] found id: ""
	I0729 13:39:42.354840  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.354854  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:42.354862  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:42.354917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:42.402524  301425 cri.go:89] found id: ""
	I0729 13:39:42.402557  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.402569  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:42.402577  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:42.402643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:42.460954  301425 cri.go:89] found id: ""
	I0729 13:39:42.460984  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.461001  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:42.461010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:42.461063  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:42.516849  301425 cri.go:89] found id: ""
	I0729 13:39:42.516880  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.516890  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:42.516898  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:42.516963  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:42.560289  301425 cri.go:89] found id: ""
	I0729 13:39:42.560316  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.560325  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:42.560332  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:42.560397  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:42.597798  301425 cri.go:89] found id: ""
	I0729 13:39:42.597829  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.597839  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:42.597847  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:42.597912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:42.633015  301425 cri.go:89] found id: ""
	I0729 13:39:42.633043  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.633059  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:42.633068  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:42.633080  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:42.711103  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:42.711126  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:42.711141  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.787459  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:42.787499  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:42.828965  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:42.829002  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:42.881702  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:42.881740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:45.396462  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:45.410766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:45.410859  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:45.445886  301425 cri.go:89] found id: ""
	I0729 13:39:45.445931  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.445943  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:45.445960  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:45.446023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:45.484293  301425 cri.go:89] found id: ""
	I0729 13:39:45.484326  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.484338  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:45.484346  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:45.484410  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:45.520209  301425 cri.go:89] found id: ""
	I0729 13:39:45.520237  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.520246  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:45.520252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:45.520300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:45.555671  301425 cri.go:89] found id: ""
	I0729 13:39:45.555702  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.555711  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:45.555717  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:45.555767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:45.594578  301425 cri.go:89] found id: ""
	I0729 13:39:45.594609  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.594618  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:45.594624  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:45.594685  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:45.631777  301425 cri.go:89] found id: ""
	I0729 13:39:45.631805  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.631817  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:45.631825  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:45.631881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:45.667163  301425 cri.go:89] found id: ""
	I0729 13:39:45.667189  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.667197  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:45.667203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:45.667258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:45.703393  301425 cri.go:89] found id: ""
	I0729 13:39:45.703434  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.703443  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:45.703454  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:45.703488  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:45.774424  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:45.774452  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:45.774472  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:45.857529  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:45.857586  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:45.899737  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:45.899775  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:45.952640  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:45.952685  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:42.666728  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.165982  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.825543  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.323544  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:47.323595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:44.825237  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:46.825276  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.467705  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:48.482292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:48.482380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:48.520146  301425 cri.go:89] found id: ""
	I0729 13:39:48.520181  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.520195  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:48.520204  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:48.520282  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:48.552623  301425 cri.go:89] found id: ""
	I0729 13:39:48.552654  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.552665  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:48.552672  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:48.552734  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:48.587254  301425 cri.go:89] found id: ""
	I0729 13:39:48.587290  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.587303  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:48.587309  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:48.587368  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:48.621045  301425 cri.go:89] found id: ""
	I0729 13:39:48.621076  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.621088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:48.621096  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:48.621160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:48.654117  301425 cri.go:89] found id: ""
	I0729 13:39:48.654151  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.654163  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:48.654171  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:48.654236  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:48.693108  301425 cri.go:89] found id: ""
	I0729 13:39:48.693149  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.693166  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:48.693173  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:48.693225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:48.733000  301425 cri.go:89] found id: ""
	I0729 13:39:48.733025  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.733033  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:48.733039  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:48.733088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:48.773761  301425 cri.go:89] found id: ""
	I0729 13:39:48.773789  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.773798  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:48.773807  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:48.773822  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:48.826655  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:48.826683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.840335  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:48.840364  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:48.913727  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:48.913754  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:48.913774  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:48.990196  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:48.990235  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:47.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.167105  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.667165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.324027  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.324146  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.825859  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.326299  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.533333  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:51.547115  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:51.547175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:51.583247  301425 cri.go:89] found id: ""
	I0729 13:39:51.583284  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.583292  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:51.583297  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:51.583350  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:51.618925  301425 cri.go:89] found id: ""
	I0729 13:39:51.618958  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.618969  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:51.618977  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:51.619036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:51.657099  301425 cri.go:89] found id: ""
	I0729 13:39:51.657132  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.657144  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:51.657151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:51.657210  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:51.695413  301425 cri.go:89] found id: ""
	I0729 13:39:51.695459  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.695471  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:51.695480  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:51.695553  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:51.731153  301425 cri.go:89] found id: ""
	I0729 13:39:51.731186  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.731198  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:51.731206  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:51.731271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:51.765662  301425 cri.go:89] found id: ""
	I0729 13:39:51.765716  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.765730  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:51.765740  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:51.765807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:51.800442  301425 cri.go:89] found id: ""
	I0729 13:39:51.800480  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.800491  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:51.800500  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:51.800562  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:51.844516  301425 cri.go:89] found id: ""
	I0729 13:39:51.844542  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.844551  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:51.844562  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:51.844580  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:51.896139  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:51.896176  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:51.910479  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:51.910511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:51.980025  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:51.980052  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:51.980071  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:52.054674  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:52.054717  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.596468  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:54.612233  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:54.612344  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:54.653506  301425 cri.go:89] found id: ""
	I0729 13:39:54.653547  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.653558  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:54.653565  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:54.653624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:54.696964  301425 cri.go:89] found id: ""
	I0729 13:39:54.697002  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.697015  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:54.697023  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:54.697088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:54.731165  301425 cri.go:89] found id: ""
	I0729 13:39:54.731196  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.731207  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:54.731214  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:54.731279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:54.774397  301425 cri.go:89] found id: ""
	I0729 13:39:54.774426  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.774437  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:54.774444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:54.774506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:54.813365  301425 cri.go:89] found id: ""
	I0729 13:39:54.813396  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.813408  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:54.813414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:54.813480  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:54.849936  301425 cri.go:89] found id: ""
	I0729 13:39:54.849962  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.849970  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:54.849980  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:54.850042  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:54.883979  301425 cri.go:89] found id: ""
	I0729 13:39:54.884007  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.884015  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:54.884021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:54.884087  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:54.919754  301425 cri.go:89] found id: ""
	I0729 13:39:54.919779  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.919787  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:54.919796  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:54.919817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:54.973082  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:54.973117  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:54.986534  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:54.986571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:55.055473  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:55.055499  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:55.055514  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:55.138278  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:55.138322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.166585  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:56.166714  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.824525  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.824559  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.825238  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.826464  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.826664  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.683818  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:57.698992  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:57.699070  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:57.742071  301425 cri.go:89] found id: ""
	I0729 13:39:57.742103  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.742113  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:57.742121  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:57.742185  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:57.777871  301425 cri.go:89] found id: ""
	I0729 13:39:57.777902  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.777911  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:57.777918  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:57.777975  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:57.817767  301425 cri.go:89] found id: ""
	I0729 13:39:57.817798  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.817809  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:57.817817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:57.817889  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:57.855608  301425 cri.go:89] found id: ""
	I0729 13:39:57.855634  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.855644  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:57.855651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:57.855714  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:57.891219  301425 cri.go:89] found id: ""
	I0729 13:39:57.891248  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.891258  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:57.891266  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:57.891336  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:57.926000  301425 cri.go:89] found id: ""
	I0729 13:39:57.926034  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.926045  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:57.926053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:57.926116  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:57.964935  301425 cri.go:89] found id: ""
	I0729 13:39:57.964962  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.964978  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:57.964985  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:57.965051  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:58.001363  301425 cri.go:89] found id: ""
	I0729 13:39:58.001393  301425 logs.go:276] 0 containers: []
	W0729 13:39:58.001405  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:58.001417  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:58.001434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:58.057551  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:58.057598  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:58.072162  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:58.072200  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:58.140533  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:58.140565  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:58.140582  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:58.227285  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:58.227330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:00.769075  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:00.783394  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:00.783471  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:00.831260  301425 cri.go:89] found id: ""
	I0729 13:40:00.831291  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.831301  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:00.831309  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:00.831370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:00.870017  301425 cri.go:89] found id: ""
	I0729 13:40:00.870045  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.870057  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:00.870065  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:00.870127  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:00.904691  301425 cri.go:89] found id: ""
	I0729 13:40:00.904728  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.904740  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:00.904748  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:00.904828  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:00.937221  301425 cri.go:89] found id: ""
	I0729 13:40:00.937249  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.937259  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:00.937265  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:00.937329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:58.167355  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.666837  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.824755  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.324616  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.325368  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.325689  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.326062  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.977961  301425 cri.go:89] found id: ""
	I0729 13:40:00.977991  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.978002  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:00.978010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:00.978104  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:01.014239  301425 cri.go:89] found id: ""
	I0729 13:40:01.014271  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.014283  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:01.014292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:01.014362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:01.050583  301425 cri.go:89] found id: ""
	I0729 13:40:01.050615  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.050630  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:01.050637  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:01.050696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:01.091599  301425 cri.go:89] found id: ""
	I0729 13:40:01.091624  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.091634  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:01.091643  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:01.091661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:01.146404  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:01.146445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:01.160327  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:01.160358  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:01.237120  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:01.237147  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:01.237162  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:01.321539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:01.321590  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:03.865268  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:03.879648  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:03.879724  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:03.915303  301425 cri.go:89] found id: ""
	I0729 13:40:03.915329  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.915338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:03.915344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:03.915403  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:03.951982  301425 cri.go:89] found id: ""
	I0729 13:40:03.952014  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.952023  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:03.952032  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:03.952099  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:03.989751  301425 cri.go:89] found id: ""
	I0729 13:40:03.989785  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.989796  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:03.989804  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:03.989870  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:04.026934  301425 cri.go:89] found id: ""
	I0729 13:40:04.026975  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.026988  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:04.026996  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:04.027059  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:04.064135  301425 cri.go:89] found id: ""
	I0729 13:40:04.064165  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.064175  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:04.064187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:04.064256  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:04.103080  301425 cri.go:89] found id: ""
	I0729 13:40:04.103108  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.103117  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:04.103123  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:04.103172  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:04.143370  301425 cri.go:89] found id: ""
	I0729 13:40:04.143403  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.143414  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:04.143422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:04.143491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:04.179251  301425 cri.go:89] found id: ""
	I0729 13:40:04.179286  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.179298  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:04.179311  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:04.179330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:04.261058  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:04.261089  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:04.261111  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:04.342897  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:04.342935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:04.391504  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:04.391532  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:04.443064  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:04.443106  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:03.166195  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:05.166660  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.824882  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:07.324346  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.326236  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.825685  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.959346  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:06.974377  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:06.974444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:07.007797  301425 cri.go:89] found id: ""
	I0729 13:40:07.007834  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.007847  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:07.007856  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:07.007924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:07.042707  301425 cri.go:89] found id: ""
	I0729 13:40:07.042741  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.042749  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:07.042755  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:07.042807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:07.080150  301425 cri.go:89] found id: ""
	I0729 13:40:07.080185  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.080196  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:07.080203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:07.080268  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:07.115740  301425 cri.go:89] found id: ""
	I0729 13:40:07.115777  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.115788  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:07.115796  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:07.115888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:07.154110  301425 cri.go:89] found id: ""
	I0729 13:40:07.154141  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.154151  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:07.154158  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:07.154225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:07.190819  301425 cri.go:89] found id: ""
	I0729 13:40:07.190850  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.190858  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:07.190865  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:07.190917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:07.231530  301425 cri.go:89] found id: ""
	I0729 13:40:07.231560  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.231571  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:07.231579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:07.231643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:07.272211  301425 cri.go:89] found id: ""
	I0729 13:40:07.272240  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.272247  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:07.272257  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:07.272269  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.326673  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:07.326704  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:07.341255  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:07.341282  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:07.409850  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:07.409878  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:07.409895  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:07.493105  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:07.493169  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.033906  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:10.047938  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:10.048018  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:10.084224  301425 cri.go:89] found id: ""
	I0729 13:40:10.084251  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.084259  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:10.084265  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:10.084316  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:10.120362  301425 cri.go:89] found id: ""
	I0729 13:40:10.120398  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.120409  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:10.120417  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:10.120484  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:10.154128  301425 cri.go:89] found id: ""
	I0729 13:40:10.154160  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.154170  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:10.154178  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:10.154243  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:10.189539  301425 cri.go:89] found id: ""
	I0729 13:40:10.189574  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.189588  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:10.189596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:10.189661  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:10.228821  301425 cri.go:89] found id: ""
	I0729 13:40:10.228855  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.228867  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:10.228875  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:10.228950  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:10.274726  301425 cri.go:89] found id: ""
	I0729 13:40:10.274758  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.274769  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:10.274776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:10.274845  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:10.308910  301425 cri.go:89] found id: ""
	I0729 13:40:10.308945  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.308956  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:10.308964  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:10.309030  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:10.346008  301425 cri.go:89] found id: ""
	I0729 13:40:10.346044  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.346056  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:10.346069  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:10.346091  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:10.360541  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:10.360581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:10.433763  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:10.433788  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:10.433802  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:10.520366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:10.520418  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.561482  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:10.561512  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.668816  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:10.166833  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:09.823429  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.824033  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:08.826798  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.326762  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.327128  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.114858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:13.128348  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:13.128425  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:13.165329  301425 cri.go:89] found id: ""
	I0729 13:40:13.165359  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.165370  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:13.165377  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:13.165441  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:13.200104  301425 cri.go:89] found id: ""
	I0729 13:40:13.200135  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.200148  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:13.200155  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:13.200224  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:13.238632  301425 cri.go:89] found id: ""
	I0729 13:40:13.238680  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.238688  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:13.238694  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:13.238748  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:13.270859  301425 cri.go:89] found id: ""
	I0729 13:40:13.270892  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.270901  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:13.270907  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:13.270976  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:13.308346  301425 cri.go:89] found id: ""
	I0729 13:40:13.308378  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.308386  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:13.308392  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:13.308444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:13.346286  301425 cri.go:89] found id: ""
	I0729 13:40:13.346319  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.346331  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:13.346339  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:13.346412  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:13.383699  301425 cri.go:89] found id: ""
	I0729 13:40:13.383736  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.383769  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:13.383791  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:13.383850  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:13.419958  301425 cri.go:89] found id: ""
	I0729 13:40:13.420045  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.420058  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:13.420071  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:13.420094  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.473984  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:13.474028  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:13.488376  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:13.488410  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:13.559515  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:13.559543  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:13.559560  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:13.640528  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:13.640570  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:12.665799  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.666662  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.668217  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.323746  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.323961  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:15.826422  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.326284  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.189581  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:16.203962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:16.204052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:16.240537  301425 cri.go:89] found id: ""
	I0729 13:40:16.240572  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.240583  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:16.240591  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:16.240659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:16.277060  301425 cri.go:89] found id: ""
	I0729 13:40:16.277099  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.277112  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:16.277123  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:16.277200  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:16.313839  301425 cri.go:89] found id: ""
	I0729 13:40:16.313869  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.313878  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:16.313884  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:16.313935  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:16.351806  301425 cri.go:89] found id: ""
	I0729 13:40:16.351840  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.351850  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:16.351858  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:16.351922  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:16.387122  301425 cri.go:89] found id: ""
	I0729 13:40:16.387158  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.387169  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:16.387176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:16.387242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:16.424180  301425 cri.go:89] found id: ""
	I0729 13:40:16.424209  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.424220  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:16.424229  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:16.424292  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:16.461827  301425 cri.go:89] found id: ""
	I0729 13:40:16.461865  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.461879  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:16.461889  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:16.461946  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:16.510198  301425 cri.go:89] found id: ""
	I0729 13:40:16.510230  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.510238  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:16.510248  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:16.510264  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:16.585378  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:16.585420  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.629304  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:16.629337  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:16.682386  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:16.682434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:16.698405  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:16.698436  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:16.770281  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.270551  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:19.284543  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:19.284617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:19.325194  301425 cri.go:89] found id: ""
	I0729 13:40:19.325221  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.325231  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:19.325238  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:19.325298  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:19.362007  301425 cri.go:89] found id: ""
	I0729 13:40:19.362038  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.362058  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:19.362066  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:19.362196  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:19.401162  301425 cri.go:89] found id: ""
	I0729 13:40:19.401191  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.401202  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:19.401210  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:19.401274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:19.434652  301425 cri.go:89] found id: ""
	I0729 13:40:19.434689  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.434700  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:19.434709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:19.434774  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:19.470116  301425 cri.go:89] found id: ""
	I0729 13:40:19.470149  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.470157  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:19.470164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:19.470218  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:19.503593  301425 cri.go:89] found id: ""
	I0729 13:40:19.503621  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.503629  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:19.503635  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:19.503696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:19.546127  301425 cri.go:89] found id: ""
	I0729 13:40:19.546155  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.546164  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:19.546169  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:19.546217  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:19.584600  301425 cri.go:89] found id: ""
	I0729 13:40:19.584639  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.584650  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:19.584663  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:19.584681  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:19.599411  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:19.599446  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:19.665811  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.665836  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:19.665853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:19.747295  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:19.747339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:19.790476  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:19.790516  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:18.669004  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.166437  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.824788  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.327093  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:20.825470  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.827651  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.346725  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:22.361349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:22.361443  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:22.394840  301425 cri.go:89] found id: ""
	I0729 13:40:22.394870  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.394881  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:22.394889  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:22.394956  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:22.429328  301425 cri.go:89] found id: ""
	I0729 13:40:22.429356  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.429364  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:22.429370  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:22.429431  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:22.463179  301425 cri.go:89] found id: ""
	I0729 13:40:22.463206  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.463214  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:22.463220  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:22.463291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:22.497527  301425 cri.go:89] found id: ""
	I0729 13:40:22.497557  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.497565  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:22.497571  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:22.497627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:22.537607  301425 cri.go:89] found id: ""
	I0729 13:40:22.537635  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.537646  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:22.537654  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:22.537718  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:22.580658  301425 cri.go:89] found id: ""
	I0729 13:40:22.580689  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.580701  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:22.580709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:22.580775  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:22.622229  301425 cri.go:89] found id: ""
	I0729 13:40:22.622261  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.622270  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:22.622282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:22.622346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:22.660091  301425 cri.go:89] found id: ""
	I0729 13:40:22.660120  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.660129  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:22.660139  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:22.660153  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.715053  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:22.715090  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:22.728865  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:22.728898  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:22.805760  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:22.805785  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:22.805799  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:22.890915  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:22.890960  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:25.457272  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:25.471002  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:25.471088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:25.506190  301425 cri.go:89] found id: ""
	I0729 13:40:25.506226  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.506237  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:25.506244  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:25.506297  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:25.540957  301425 cri.go:89] found id: ""
	I0729 13:40:25.540991  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.541002  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:25.541011  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:25.541074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:25.578378  301425 cri.go:89] found id: ""
	I0729 13:40:25.578424  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.578440  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:25.578448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:25.578518  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:25.620930  301425 cri.go:89] found id: ""
	I0729 13:40:25.620962  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.620979  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:25.620987  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:25.621056  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:25.655558  301425 cri.go:89] found id: ""
	I0729 13:40:25.655589  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.655597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:25.655604  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:25.655670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:25.688810  301425 cri.go:89] found id: ""
	I0729 13:40:25.688845  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.688855  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:25.688863  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:25.688930  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:25.724384  301425 cri.go:89] found id: ""
	I0729 13:40:25.724416  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.724428  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:25.724435  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:25.724514  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:25.763174  301425 cri.go:89] found id: ""
	I0729 13:40:25.763200  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.763209  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:25.763219  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:25.763232  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:25.818517  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:25.818569  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:25.833939  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:25.833973  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:25.910487  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:25.910515  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:25.910537  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:23.167028  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.666513  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:23.824183  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.827054  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.325894  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:27.824855  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.993887  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:25.993929  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:28.536843  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:28.550097  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:28.550175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:28.592664  301425 cri.go:89] found id: ""
	I0729 13:40:28.592697  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.592709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:28.592716  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:28.592788  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:28.638299  301425 cri.go:89] found id: ""
	I0729 13:40:28.638329  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.638337  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:28.638343  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:28.638395  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:28.682410  301425 cri.go:89] found id: ""
	I0729 13:40:28.682437  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.682446  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:28.682452  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:28.682511  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:28.719402  301425 cri.go:89] found id: ""
	I0729 13:40:28.719430  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.719438  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:28.719444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:28.719504  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:28.767515  301425 cri.go:89] found id: ""
	I0729 13:40:28.767547  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.767559  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:28.767568  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:28.767633  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:28.811600  301425 cri.go:89] found id: ""
	I0729 13:40:28.811632  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.811644  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:28.811652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:28.811727  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:28.853364  301425 cri.go:89] found id: ""
	I0729 13:40:28.853397  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.853407  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:28.853414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:28.853486  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:28.890981  301425 cri.go:89] found id: ""
	I0729 13:40:28.891013  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.891024  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:28.891035  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:28.891050  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:28.944174  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:28.944213  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:28.957724  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:28.957755  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:29.026457  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:29.026479  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:29.026497  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:29.105366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:29.105415  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:27.667251  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.166789  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:28.323476  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.324242  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:32.325477  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:29.825621  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.828363  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.649374  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:31.663432  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:31.663512  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:31.702047  301425 cri.go:89] found id: ""
	I0729 13:40:31.702080  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.702088  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:31.702098  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:31.702162  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:31.738484  301425 cri.go:89] found id: ""
	I0729 13:40:31.738510  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.738518  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:31.738524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:31.738583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:31.774214  301425 cri.go:89] found id: ""
	I0729 13:40:31.774249  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.774261  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:31.774270  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:31.774339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:31.810263  301425 cri.go:89] found id: ""
	I0729 13:40:31.810293  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.810302  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:31.810307  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:31.810369  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:31.848124  301425 cri.go:89] found id: ""
	I0729 13:40:31.848153  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.848160  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:31.848167  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:31.848234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:31.885531  301425 cri.go:89] found id: ""
	I0729 13:40:31.885561  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.885571  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:31.885580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:31.885650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:31.923904  301425 cri.go:89] found id: ""
	I0729 13:40:31.923939  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.923952  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:31.923959  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:31.924029  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:31.957165  301425 cri.go:89] found id: ""
	I0729 13:40:31.957202  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.957213  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:31.957228  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:31.957248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:32.039221  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:32.039262  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.078191  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:32.078229  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:32.131871  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:32.131922  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:32.146676  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:32.146706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:32.223849  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:34.724927  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:34.739029  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:34.739113  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:34.774627  301425 cri.go:89] found id: ""
	I0729 13:40:34.774660  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.774669  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:34.774675  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:34.774743  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:34.809840  301425 cri.go:89] found id: ""
	I0729 13:40:34.809872  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.809882  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:34.809887  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:34.809940  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:34.847530  301425 cri.go:89] found id: ""
	I0729 13:40:34.847561  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.847572  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:34.847580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:34.847648  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:34.881828  301425 cri.go:89] found id: ""
	I0729 13:40:34.881856  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.881870  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:34.881876  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:34.881937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:34.918903  301425 cri.go:89] found id: ""
	I0729 13:40:34.918937  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.918949  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:34.918956  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:34.919015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:34.954714  301425 cri.go:89] found id: ""
	I0729 13:40:34.954749  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.954761  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:34.954770  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:34.954825  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:34.993433  301425 cri.go:89] found id: ""
	I0729 13:40:34.993463  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.993472  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:34.993478  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:34.993531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:35.033830  301425 cri.go:89] found id: ""
	I0729 13:40:35.033859  301425 logs.go:276] 0 containers: []
	W0729 13:40:35.033874  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:35.033884  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:35.033900  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:35.084546  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:35.084595  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:35.098807  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:35.098845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:35.182636  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:35.182662  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:35.182674  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:35.262767  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:35.262808  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.665817  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.670805  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.823905  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.824232  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.326644  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.825977  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:37.802033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:37.815633  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:37.815697  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:37.857522  301425 cri.go:89] found id: ""
	I0729 13:40:37.857552  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.857563  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:37.857571  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:37.857627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:37.897527  301425 cri.go:89] found id: ""
	I0729 13:40:37.897564  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.897575  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:37.897583  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:37.897649  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.937135  301425 cri.go:89] found id: ""
	I0729 13:40:37.937167  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.937176  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:37.937189  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:37.937255  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:37.972699  301425 cri.go:89] found id: ""
	I0729 13:40:37.972734  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.972751  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:37.972761  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:37.972933  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:38.012702  301425 cri.go:89] found id: ""
	I0729 13:40:38.012732  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.012740  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:38.012747  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:38.012832  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:38.050228  301425 cri.go:89] found id: ""
	I0729 13:40:38.050260  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.050268  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:38.050275  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:38.050329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:38.084665  301425 cri.go:89] found id: ""
	I0729 13:40:38.084693  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.084707  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:38.084715  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:38.084780  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:38.119155  301425 cri.go:89] found id: ""
	I0729 13:40:38.119200  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.119211  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:38.119222  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:38.119236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:38.170934  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:38.170968  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:38.185298  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:38.185329  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:38.256118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:38.256149  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:38.256166  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:38.337090  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:38.337127  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:40.876177  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:40.889580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:40.889655  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:40.922971  301425 cri.go:89] found id: ""
	I0729 13:40:40.923002  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.923010  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:40.923016  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:40.923074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:40.955840  301425 cri.go:89] found id: ""
	I0729 13:40:40.955872  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.955884  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:40.955891  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:40.955952  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.165718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.166160  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.168344  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:38.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.324607  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.324996  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.344232  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:40.993258  301425 cri.go:89] found id: ""
	I0729 13:40:40.993290  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.993298  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:40.993305  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:40.993357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:41.026370  301425 cri.go:89] found id: ""
	I0729 13:40:41.026398  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.026409  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:41.026416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:41.026473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:41.060538  301425 cri.go:89] found id: ""
	I0729 13:40:41.060565  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.060574  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:41.060579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:41.060630  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:41.105074  301425 cri.go:89] found id: ""
	I0729 13:40:41.105108  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.105118  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:41.105126  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:41.105193  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:41.138254  301425 cri.go:89] found id: ""
	I0729 13:40:41.138280  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.138288  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:41.138294  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:41.138342  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:41.171432  301425 cri.go:89] found id: ""
	I0729 13:40:41.171458  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.171466  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:41.171475  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:41.171487  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:41.184703  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:41.184736  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:41.265356  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:41.265392  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:41.265409  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:41.345939  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:41.345979  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:41.388819  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:41.388852  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:43.940388  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:43.955448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:43.955515  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:43.998457  301425 cri.go:89] found id: ""
	I0729 13:40:43.998494  301425 logs.go:276] 0 containers: []
	W0729 13:40:43.998506  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:43.998515  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:43.998584  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:44.038142  301425 cri.go:89] found id: ""
	I0729 13:40:44.038173  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.038185  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:44.038193  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:44.038260  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:44.077270  301425 cri.go:89] found id: ""
	I0729 13:40:44.077302  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.077313  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:44.077321  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:44.077391  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:44.117612  301425 cri.go:89] found id: ""
	I0729 13:40:44.117641  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.117661  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:44.117681  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:44.117749  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:44.152564  301425 cri.go:89] found id: ""
	I0729 13:40:44.152603  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.152615  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:44.152623  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:44.152683  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:44.188245  301425 cri.go:89] found id: ""
	I0729 13:40:44.188276  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.188288  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:44.188296  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:44.188355  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:44.224947  301425 cri.go:89] found id: ""
	I0729 13:40:44.224975  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.224983  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:44.224989  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:44.225037  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:44.264830  301425 cri.go:89] found id: ""
	I0729 13:40:44.264860  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.264867  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:44.264877  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:44.264893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:44.343145  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:44.343182  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:44.384619  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:44.384650  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:44.438195  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:44.438237  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:44.452115  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:44.452152  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:44.526586  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:43.666987  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.167143  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.825141  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.324972  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.827065  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.325488  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:47.027726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:47.041174  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:47.041242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:47.079265  301425 cri.go:89] found id: ""
	I0729 13:40:47.079295  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.079304  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:47.079313  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:47.079380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:47.119775  301425 cri.go:89] found id: ""
	I0729 13:40:47.119807  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.119820  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:47.119828  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:47.119904  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:47.155381  301425 cri.go:89] found id: ""
	I0729 13:40:47.155415  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.155426  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:47.155434  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:47.155490  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:47.195071  301425 cri.go:89] found id: ""
	I0729 13:40:47.195103  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.195111  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:47.195117  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:47.195167  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:47.229487  301425 cri.go:89] found id: ""
	I0729 13:40:47.229519  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.229531  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:47.229539  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:47.229611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:47.266159  301425 cri.go:89] found id: ""
	I0729 13:40:47.266190  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.266201  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:47.266209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:47.266269  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:47.300813  301425 cri.go:89] found id: ""
	I0729 13:40:47.300845  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.300854  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:47.300860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:47.300916  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:47.340378  301425 cri.go:89] found id: ""
	I0729 13:40:47.340412  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.340432  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:47.340444  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:47.340464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:47.395403  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:47.395444  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:47.409505  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:47.409539  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:47.481327  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.481349  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:47.481365  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:47.560129  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:47.560172  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.105832  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:50.121192  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:50.121264  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:50.160217  301425 cri.go:89] found id: ""
	I0729 13:40:50.160247  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.160256  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:50.160262  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:50.160313  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:50.199952  301425 cri.go:89] found id: ""
	I0729 13:40:50.199986  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.199998  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:50.200005  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:50.200065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:50.240036  301425 cri.go:89] found id: ""
	I0729 13:40:50.240069  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.240076  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:50.240083  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:50.240134  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:50.279761  301425 cri.go:89] found id: ""
	I0729 13:40:50.279788  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.279796  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:50.279802  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:50.279852  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:50.320324  301425 cri.go:89] found id: ""
	I0729 13:40:50.320350  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.320358  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:50.320364  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:50.320423  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:50.356385  301425 cri.go:89] found id: ""
	I0729 13:40:50.356413  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.356421  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:50.356427  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:50.356482  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:50.396866  301425 cri.go:89] found id: ""
	I0729 13:40:50.396900  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.396912  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:50.396919  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:50.397008  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:50.434778  301425 cri.go:89] found id: ""
	I0729 13:40:50.434812  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.434823  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:50.434836  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:50.434853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:50.447746  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:50.447776  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:50.523750  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:50.523772  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:50.523787  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:50.604206  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:50.604255  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.647414  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:50.647449  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:48.666463  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.666670  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.823595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.824045  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.826836  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:51.326943  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.327715  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.201653  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:53.215745  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:53.215814  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:53.250482  301425 cri.go:89] found id: ""
	I0729 13:40:53.250508  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.250516  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:53.250522  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:53.250583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:53.285956  301425 cri.go:89] found id: ""
	I0729 13:40:53.285988  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.285996  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:53.286002  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:53.286055  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:53.320248  301425 cri.go:89] found id: ""
	I0729 13:40:53.320281  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.320292  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:53.320300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:53.320364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:53.355155  301425 cri.go:89] found id: ""
	I0729 13:40:53.355188  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.355200  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:53.355209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:53.355271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:53.389519  301425 cri.go:89] found id: ""
	I0729 13:40:53.389549  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.389557  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:53.389564  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:53.389620  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:53.424391  301425 cri.go:89] found id: ""
	I0729 13:40:53.424419  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.424427  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:53.424433  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:53.424492  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:53.463297  301425 cri.go:89] found id: ""
	I0729 13:40:53.463331  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.463342  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:53.463350  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:53.463433  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:53.497565  301425 cri.go:89] found id: ""
	I0729 13:40:53.497593  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.497601  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:53.497610  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:53.497622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.548906  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:53.548948  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:53.562789  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:53.562823  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:53.635656  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:53.635679  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:53.635693  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:53.715973  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:53.716024  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:53.166007  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.166420  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.324486  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.824480  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.825127  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.326505  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:56.258726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:56.273826  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:56.273905  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:56.310881  301425 cri.go:89] found id: ""
	I0729 13:40:56.310927  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.310936  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:56.310944  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:56.310999  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:56.350104  301425 cri.go:89] found id: ""
	I0729 13:40:56.350139  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.350151  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:56.350158  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:56.350221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:56.385100  301425 cri.go:89] found id: ""
	I0729 13:40:56.385136  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.385145  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:56.385151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:56.385234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:56.421904  301425 cri.go:89] found id: ""
	I0729 13:40:56.421941  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.421953  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:56.421961  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:56.422025  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:56.457366  301425 cri.go:89] found id: ""
	I0729 13:40:56.457403  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.457414  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:56.457422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:56.457491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:56.496700  301425 cri.go:89] found id: ""
	I0729 13:40:56.496732  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.496746  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:56.496755  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:56.496844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:56.532011  301425 cri.go:89] found id: ""
	I0729 13:40:56.532039  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.532047  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:56.532053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:56.532102  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:56.567511  301425 cri.go:89] found id: ""
	I0729 13:40:56.567543  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.567554  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:56.567566  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:56.567581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:56.615875  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:56.615914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:56.629818  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:56.629862  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:56.703255  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:56.703284  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:56.703298  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:56.786466  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:56.786508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:59.328670  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:59.342993  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:59.343061  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:59.378267  301425 cri.go:89] found id: ""
	I0729 13:40:59.378301  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.378313  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:59.378321  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:59.378392  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:59.415637  301425 cri.go:89] found id: ""
	I0729 13:40:59.415669  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.415680  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:59.415687  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:59.415759  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:59.451170  301425 cri.go:89] found id: ""
	I0729 13:40:59.451204  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.451212  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:59.451219  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:59.451275  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:59.485914  301425 cri.go:89] found id: ""
	I0729 13:40:59.485948  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.485960  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:59.485975  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:59.486052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:59.523168  301425 cri.go:89] found id: ""
	I0729 13:40:59.523198  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.523208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:59.523216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:59.523274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:59.557711  301425 cri.go:89] found id: ""
	I0729 13:40:59.557746  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.557758  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:59.557766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:59.557826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:59.593387  301425 cri.go:89] found id: ""
	I0729 13:40:59.593421  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.593434  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:59.593442  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:59.593506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:59.627521  301425 cri.go:89] found id: ""
	I0729 13:40:59.627555  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.627566  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:59.627578  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:59.627597  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:59.677497  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:59.677538  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:59.692116  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:59.692150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:59.759344  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:59.759369  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:59.759382  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:59.840380  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:59.840423  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:57.166964  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:59.666395  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:01.667229  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.323708  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.323995  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.325049  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.328293  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.826414  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.380718  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:02.394436  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:02.394497  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:02.433283  301425 cri.go:89] found id: ""
	I0729 13:41:02.433313  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.433323  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:02.433332  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:02.433393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:02.467206  301425 cri.go:89] found id: ""
	I0729 13:41:02.467232  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.467241  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:02.467247  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:02.467300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:02.502743  301425 cri.go:89] found id: ""
	I0729 13:41:02.502774  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.502783  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:02.502790  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:02.502844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:02.536415  301425 cri.go:89] found id: ""
	I0729 13:41:02.536449  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.536462  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:02.536470  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:02.536527  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:02.570572  301425 cri.go:89] found id: ""
	I0729 13:41:02.570610  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.570621  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:02.570629  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:02.570702  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:02.606251  301425 cri.go:89] found id: ""
	I0729 13:41:02.606277  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.606285  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:02.606292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:02.606345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:02.644637  301425 cri.go:89] found id: ""
	I0729 13:41:02.644664  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.644675  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:02.644683  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:02.644750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:02.679493  301425 cri.go:89] found id: ""
	I0729 13:41:02.679519  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.679527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:02.679537  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:02.679553  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.734865  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:02.734896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:02.787929  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:02.787962  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:02.801317  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:02.801344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:02.867838  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:02.867862  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:02.867877  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.451323  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:05.465262  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:05.465338  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:05.499797  301425 cri.go:89] found id: ""
	I0729 13:41:05.499827  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.499837  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:05.499845  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:05.499912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:05.534363  301425 cri.go:89] found id: ""
	I0729 13:41:05.534403  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.534416  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:05.534424  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:05.534483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:05.571366  301425 cri.go:89] found id: ""
	I0729 13:41:05.571397  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.571408  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:05.571416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:05.571481  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:05.611301  301425 cri.go:89] found id: ""
	I0729 13:41:05.611335  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.611346  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:05.611355  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:05.611422  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:05.650698  301425 cri.go:89] found id: ""
	I0729 13:41:05.650738  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.650750  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:05.650758  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:05.650823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:05.686166  301425 cri.go:89] found id: ""
	I0729 13:41:05.686204  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.686216  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:05.686225  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:05.686279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:05.724567  301425 cri.go:89] found id: ""
	I0729 13:41:05.724604  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.724616  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:05.724628  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:05.724691  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:05.760401  301425 cri.go:89] found id: ""
	I0729 13:41:05.760430  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.760438  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:05.760448  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:05.760464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:05.811654  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:05.811698  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:05.827189  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:05.827226  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:05.899612  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:05.899636  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:05.899654  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:04.168533  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.665694  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:04.325443  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.824244  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.325499  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:07.326413  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.982384  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:05.982425  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.527609  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:08.542024  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:08.542086  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:08.576313  301425 cri.go:89] found id: ""
	I0729 13:41:08.576340  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.576348  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:08.576354  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:08.576406  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:08.609996  301425 cri.go:89] found id: ""
	I0729 13:41:08.610027  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.610038  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:08.610045  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:08.610111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:08.643722  301425 cri.go:89] found id: ""
	I0729 13:41:08.643750  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.643758  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:08.643765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:08.643815  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:08.679331  301425 cri.go:89] found id: ""
	I0729 13:41:08.679367  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.679378  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:08.679388  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:08.679459  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:08.718348  301425 cri.go:89] found id: ""
	I0729 13:41:08.718376  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.718384  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:08.718390  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:08.718444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:08.758086  301425 cri.go:89] found id: ""
	I0729 13:41:08.758128  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.758140  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:08.758150  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:08.758225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:08.794304  301425 cri.go:89] found id: ""
	I0729 13:41:08.794333  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.794345  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:08.794354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:08.794415  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:08.835448  301425 cri.go:89] found id: ""
	I0729 13:41:08.835477  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.835486  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:08.835495  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:08.835508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:08.923886  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:08.923931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.963921  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:08.963957  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:09.013852  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:09.013893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:09.027838  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:09.027872  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:09.097864  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:08.669271  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.165979  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:08.824724  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:10.825582  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:09.327071  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.826906  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.598762  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:11.612789  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:11.612903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:11.650029  301425 cri.go:89] found id: ""
	I0729 13:41:11.650063  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.650074  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:11.650084  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:11.650152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:11.687479  301425 cri.go:89] found id: ""
	I0729 13:41:11.687510  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.687520  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:11.687527  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:11.687593  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:11.723788  301425 cri.go:89] found id: ""
	I0729 13:41:11.723816  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.723824  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:11.723830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:11.723878  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:11.760304  301425 cri.go:89] found id: ""
	I0729 13:41:11.760341  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.760353  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:11.760361  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:11.760429  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:11.794175  301425 cri.go:89] found id: ""
	I0729 13:41:11.794202  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.794210  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:11.794216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:11.794276  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:11.830653  301425 cri.go:89] found id: ""
	I0729 13:41:11.830679  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.830689  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:11.830697  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:11.830755  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:11.869360  301425 cri.go:89] found id: ""
	I0729 13:41:11.869391  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.869403  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:11.869410  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:11.869473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:11.904164  301425 cri.go:89] found id: ""
	I0729 13:41:11.904195  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.904206  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:11.904218  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:11.904236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:11.979031  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.979054  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:11.979069  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:12.064215  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:12.064254  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:12.101854  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:12.101896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:12.152327  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:12.152362  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:14.668032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:14.683118  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:14.683182  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:14.722574  301425 cri.go:89] found id: ""
	I0729 13:41:14.722602  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.722612  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:14.722619  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:14.722686  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:14.759047  301425 cri.go:89] found id: ""
	I0729 13:41:14.759084  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.759094  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:14.759099  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:14.759156  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:14.794363  301425 cri.go:89] found id: ""
	I0729 13:41:14.794400  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.794411  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:14.794418  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:14.794488  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:14.831542  301425 cri.go:89] found id: ""
	I0729 13:41:14.831579  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.831586  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:14.831592  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:14.831650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:14.878710  301425 cri.go:89] found id: ""
	I0729 13:41:14.878745  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.878758  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:14.878765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:14.878824  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:14.937804  301425 cri.go:89] found id: ""
	I0729 13:41:14.937837  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.937847  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:14.937856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:14.937923  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:14.985616  301425 cri.go:89] found id: ""
	I0729 13:41:14.985649  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.985658  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:14.985665  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:14.985737  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:15.023210  301425 cri.go:89] found id: ""
	I0729 13:41:15.023248  301425 logs.go:276] 0 containers: []
	W0729 13:41:15.023261  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:15.023273  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:15.023288  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:15.072549  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:15.072587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:15.086624  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:15.086653  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:15.155391  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:15.155412  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:15.155426  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:15.237480  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:15.237535  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:13.666473  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.666831  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:13.324177  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.324419  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:14.326023  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:16.826314  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.779568  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:17.794163  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:17.794225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:17.831416  301425 cri.go:89] found id: ""
	I0729 13:41:17.831446  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.831456  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:17.831463  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:17.831519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:17.868713  301425 cri.go:89] found id: ""
	I0729 13:41:17.868740  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.868752  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:17.868758  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:17.868834  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:17.913159  301425 cri.go:89] found id: ""
	I0729 13:41:17.913200  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.913211  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:17.913221  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:17.913291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:17.947528  301425 cri.go:89] found id: ""
	I0729 13:41:17.947559  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.947567  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:17.947573  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:17.947693  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:17.982280  301425 cri.go:89] found id: ""
	I0729 13:41:17.982314  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.982323  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:17.982330  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:17.982407  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:18.023729  301425 cri.go:89] found id: ""
	I0729 13:41:18.023767  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.023776  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:18.023783  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:18.023847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:18.061594  301425 cri.go:89] found id: ""
	I0729 13:41:18.061629  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.061637  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:18.061642  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:18.061694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:18.095705  301425 cri.go:89] found id: ""
	I0729 13:41:18.095735  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.095745  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:18.095758  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:18.095778  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:18.175843  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:18.175879  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:18.222979  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:18.223015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:18.277265  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:18.277308  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:18.291002  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:18.291037  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:18.373425  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:20.873958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:20.888091  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:20.888153  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:20.925850  301425 cri.go:89] found id: ""
	I0729 13:41:20.925886  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.925894  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:20.925901  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:20.925955  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:20.962725  301425 cri.go:89] found id: ""
	I0729 13:41:20.962762  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.962774  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:20.962782  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:20.962847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:18.166668  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.166993  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.827065  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.325697  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:19.325369  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:21.326574  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.998741  301425 cri.go:89] found id: ""
	I0729 13:41:20.998778  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.998787  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:20.998794  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:20.998842  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:21.036370  301425 cri.go:89] found id: ""
	I0729 13:41:21.036401  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.036410  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:21.036417  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:21.036483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:21.071560  301425 cri.go:89] found id: ""
	I0729 13:41:21.071588  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.071597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:21.071605  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:21.071670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:21.106778  301425 cri.go:89] found id: ""
	I0729 13:41:21.106810  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.106822  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:21.106830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:21.106890  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:21.139901  301425 cri.go:89] found id: ""
	I0729 13:41:21.139926  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.139934  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:21.139940  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:21.140001  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:21.173281  301425 cri.go:89] found id: ""
	I0729 13:41:21.173312  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.173320  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:21.173330  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:21.173344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:21.225055  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:21.225095  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:21.239780  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:21.239864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:21.313460  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:21.313486  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:21.313504  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:21.398557  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:21.398599  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:23.937873  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:23.951595  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:23.951653  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:23.987177  301425 cri.go:89] found id: ""
	I0729 13:41:23.987208  301425 logs.go:276] 0 containers: []
	W0729 13:41:23.987217  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:23.987225  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:23.987324  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:24.030197  301425 cri.go:89] found id: ""
	I0729 13:41:24.030251  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.030264  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:24.030272  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:24.030339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:24.068031  301425 cri.go:89] found id: ""
	I0729 13:41:24.068061  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.068074  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:24.068081  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:24.068154  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:24.107192  301425 cri.go:89] found id: ""
	I0729 13:41:24.107221  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.107232  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:24.107239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:24.107304  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:24.143154  301425 cri.go:89] found id: ""
	I0729 13:41:24.143182  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.143190  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:24.143196  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:24.143248  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:24.181268  301425 cri.go:89] found id: ""
	I0729 13:41:24.181296  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.181304  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:24.181311  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:24.181370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:24.215248  301425 cri.go:89] found id: ""
	I0729 13:41:24.215284  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.215293  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:24.215299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:24.215363  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:24.250796  301425 cri.go:89] found id: ""
	I0729 13:41:24.250822  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.250831  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:24.250841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:24.250853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:24.305841  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:24.305883  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:24.320182  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:24.320214  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:24.389667  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:24.389690  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:24.389707  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:24.471435  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:24.471479  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:22.665718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.166432  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:22.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:24.826598  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:26.828504  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:23.825754  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.834253  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:28.329733  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:27.014508  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:27.029318  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:27.029382  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:27.064115  301425 cri.go:89] found id: ""
	I0729 13:41:27.064150  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.064161  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:27.064169  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:27.064250  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:27.099081  301425 cri.go:89] found id: ""
	I0729 13:41:27.099110  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.099123  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:27.099131  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:27.099197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:27.132475  301425 cri.go:89] found id: ""
	I0729 13:41:27.132506  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.132518  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:27.132527  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:27.132595  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:27.168924  301425 cri.go:89] found id: ""
	I0729 13:41:27.168948  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.168956  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:27.168962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:27.169015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:27.204052  301425 cri.go:89] found id: ""
	I0729 13:41:27.204082  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.204094  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:27.204109  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:27.204170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:27.238355  301425 cri.go:89] found id: ""
	I0729 13:41:27.238383  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.238391  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:27.238397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:27.238496  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:27.276104  301425 cri.go:89] found id: ""
	I0729 13:41:27.276139  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.276150  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:27.276157  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:27.276222  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:27.308612  301425 cri.go:89] found id: ""
	I0729 13:41:27.308643  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.308654  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:27.308667  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:27.308683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:27.362472  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:27.362511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:27.376349  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:27.376383  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:27.458450  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:27.458472  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:27.458486  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:27.536405  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:27.536445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:30.076285  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:30.091308  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:30.091386  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:30.138335  301425 cri.go:89] found id: ""
	I0729 13:41:30.138369  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.138381  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:30.138389  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:30.138454  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:30.176395  301425 cri.go:89] found id: ""
	I0729 13:41:30.176425  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.176435  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:30.176443  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:30.176495  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:30.214990  301425 cri.go:89] found id: ""
	I0729 13:41:30.215027  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.215035  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:30.215041  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:30.215090  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:30.252051  301425 cri.go:89] found id: ""
	I0729 13:41:30.252080  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.252088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:30.252094  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:30.252155  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:30.287210  301425 cri.go:89] found id: ""
	I0729 13:41:30.287240  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.287249  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:30.287254  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:30.287337  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:30.322813  301425 cri.go:89] found id: ""
	I0729 13:41:30.322842  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.322851  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:30.322857  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:30.322924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:30.358697  301425 cri.go:89] found id: ""
	I0729 13:41:30.358730  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.358738  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:30.358744  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:30.358804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:30.394252  301425 cri.go:89] found id: ""
	I0729 13:41:30.394283  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.394294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:30.394305  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:30.394321  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:30.446777  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:30.446820  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:30.461564  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:30.461605  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:30.537918  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:30.537942  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:30.537958  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:30.613821  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:30.613865  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:27.167654  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.666133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.323396  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:31.324718  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:30.825879  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:32.826458  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.154081  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:33.168252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:33.168353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:33.205675  301425 cri.go:89] found id: ""
	I0729 13:41:33.205708  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.205719  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:33.205727  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:33.205799  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:33.240556  301425 cri.go:89] found id: ""
	I0729 13:41:33.240582  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.240590  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:33.240596  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:33.240644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:33.276662  301425 cri.go:89] found id: ""
	I0729 13:41:33.276690  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.276698  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:33.276704  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:33.276773  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:33.318631  301425 cri.go:89] found id: ""
	I0729 13:41:33.318667  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.318677  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:33.318685  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:33.318762  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:33.354372  301425 cri.go:89] found id: ""
	I0729 13:41:33.354403  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.354412  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:33.354421  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:33.354475  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:33.389309  301425 cri.go:89] found id: ""
	I0729 13:41:33.389337  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.389346  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:33.389352  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:33.389404  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:33.423689  301425 cri.go:89] found id: ""
	I0729 13:41:33.423732  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.423745  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:33.423753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:33.423823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:33.457556  301425 cri.go:89] found id: ""
	I0729 13:41:33.457593  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.457605  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:33.457618  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:33.457634  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:33.534377  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:33.534416  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.579646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:33.579689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:33.629784  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:33.629819  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:33.643878  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:33.643912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:33.716446  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:32.167152  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:34.666054  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.667479  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.823726  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.824199  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.324827  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.325672  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.216598  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:36.229904  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:36.230003  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:36.263721  301425 cri.go:89] found id: ""
	I0729 13:41:36.263752  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.263771  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:36.263786  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:36.263838  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:36.297900  301425 cri.go:89] found id: ""
	I0729 13:41:36.297932  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.297950  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:36.297958  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:36.298023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:36.338037  301425 cri.go:89] found id: ""
	I0729 13:41:36.338064  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.338072  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:36.338078  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:36.338125  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:36.375334  301425 cri.go:89] found id: ""
	I0729 13:41:36.375362  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.375370  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:36.375375  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:36.375426  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:36.410760  301425 cri.go:89] found id: ""
	I0729 13:41:36.410794  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.410805  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:36.410813  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:36.410888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:36.445247  301425 cri.go:89] found id: ""
	I0729 13:41:36.445280  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.445291  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:36.445300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:36.445364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:36.487183  301425 cri.go:89] found id: ""
	I0729 13:41:36.487214  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.487221  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:36.487228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:36.487301  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:36.522407  301425 cri.go:89] found id: ""
	I0729 13:41:36.522433  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.522442  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:36.522453  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:36.522468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:36.537163  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:36.537197  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:36.608334  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.608361  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:36.608376  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:36.689026  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:36.689074  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:36.728580  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:36.728618  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.279605  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:39.293259  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:39.293320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:39.329070  301425 cri.go:89] found id: ""
	I0729 13:41:39.329095  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.329103  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:39.329109  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:39.329160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:39.362992  301425 cri.go:89] found id: ""
	I0729 13:41:39.363023  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.363032  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:39.363038  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:39.363100  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:39.403094  301425 cri.go:89] found id: ""
	I0729 13:41:39.403128  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.403140  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:39.403147  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:39.403201  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:39.435761  301425 cri.go:89] found id: ""
	I0729 13:41:39.435795  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.435806  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:39.435814  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:39.435881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:39.468299  301425 cri.go:89] found id: ""
	I0729 13:41:39.468332  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.468341  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:39.468349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:39.468417  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:39.505114  301425 cri.go:89] found id: ""
	I0729 13:41:39.505149  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.505162  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:39.505172  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:39.505234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:39.536942  301425 cri.go:89] found id: ""
	I0729 13:41:39.536975  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.536986  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:39.536994  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:39.537064  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:39.577394  301425 cri.go:89] found id: ""
	I0729 13:41:39.577427  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.577439  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:39.577451  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:39.577468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.631143  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:39.631184  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:39.645020  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:39.645047  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:39.718256  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:39.718283  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:39.718297  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:39.801990  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:39.802036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:39.166762  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.167646  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.824966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.825836  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.324009  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.327169  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.826091  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.347066  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:42.359902  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:42.359983  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:42.395494  301425 cri.go:89] found id: ""
	I0729 13:41:42.395529  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.395540  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:42.395548  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:42.395611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:42.429305  301425 cri.go:89] found id: ""
	I0729 13:41:42.429334  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.429343  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:42.429350  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:42.429401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:42.466902  301425 cri.go:89] found id: ""
	I0729 13:41:42.466931  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.466942  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:42.466949  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:42.467017  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:42.504582  301425 cri.go:89] found id: ""
	I0729 13:41:42.504618  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.504628  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:42.504652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:42.504717  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:42.539649  301425 cri.go:89] found id: ""
	I0729 13:41:42.539676  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.539686  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:42.539695  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:42.539758  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:42.579209  301425 cri.go:89] found id: ""
	I0729 13:41:42.579238  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.579249  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:42.579257  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:42.579320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:42.614832  301425 cri.go:89] found id: ""
	I0729 13:41:42.614861  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.614869  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:42.614874  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:42.614925  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:42.651837  301425 cri.go:89] found id: ""
	I0729 13:41:42.651865  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.651873  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:42.651883  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:42.651899  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:42.707149  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:42.707190  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:42.720990  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:42.721043  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:42.789818  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:42.789849  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:42.789867  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:42.871880  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:42.871934  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.416172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:45.428923  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:45.428994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:45.466667  301425 cri.go:89] found id: ""
	I0729 13:41:45.466699  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.466710  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:45.466717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:45.466783  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:45.501779  301425 cri.go:89] found id: ""
	I0729 13:41:45.501813  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.501825  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:45.501832  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:45.501896  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:45.537507  301425 cri.go:89] found id: ""
	I0729 13:41:45.537537  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.537547  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:45.537554  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:45.537619  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:45.575430  301425 cri.go:89] found id: ""
	I0729 13:41:45.575460  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.575467  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:45.575474  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:45.575523  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:45.613009  301425 cri.go:89] found id: ""
	I0729 13:41:45.613038  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.613047  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:45.613053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:45.613103  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:45.650734  301425 cri.go:89] found id: ""
	I0729 13:41:45.650767  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.650778  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:45.650786  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:45.650853  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:45.684301  301425 cri.go:89] found id: ""
	I0729 13:41:45.684332  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.684341  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:45.684349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:45.684416  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:45.719861  301425 cri.go:89] found id: ""
	I0729 13:41:45.719901  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.719911  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:45.719921  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:45.719936  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:45.800422  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:45.800464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.842460  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:45.842493  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:45.897388  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:45.897430  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:45.911554  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:45.911587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:41:43.665771  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.666196  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:44.325813  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:46.824774  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:43.828518  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.830106  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:48.325196  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	W0729 13:41:45.984435  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.485014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:48.498038  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:48.498110  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:48.534248  301425 cri.go:89] found id: ""
	I0729 13:41:48.534280  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.534291  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:48.534299  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:48.534362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:48.572411  301425 cri.go:89] found id: ""
	I0729 13:41:48.572445  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.572457  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:48.572465  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:48.572524  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:48.612345  301425 cri.go:89] found id: ""
	I0729 13:41:48.612373  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.612381  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:48.612387  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:48.612450  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:48.650334  301425 cri.go:89] found id: ""
	I0729 13:41:48.650385  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.650395  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:48.650401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:48.650466  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:48.687460  301425 cri.go:89] found id: ""
	I0729 13:41:48.687490  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.687501  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:48.687508  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:48.687572  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:48.735028  301425 cri.go:89] found id: ""
	I0729 13:41:48.735064  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.735077  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:48.735085  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:48.735142  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:48.771175  301425 cri.go:89] found id: ""
	I0729 13:41:48.771209  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.771220  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:48.771228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:48.771300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:48.808267  301425 cri.go:89] found id: ""
	I0729 13:41:48.808295  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.808304  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:48.808314  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:48.808328  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:48.850520  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:48.850557  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:48.902563  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:48.902612  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:48.919082  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:48.919114  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:48.999185  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.999213  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:48.999241  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:48.166020  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:49.323402  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.326596  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.825399  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:52.831823  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.579922  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:51.593149  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:51.593213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:51.626302  301425 cri.go:89] found id: ""
	I0729 13:41:51.626330  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.626338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:51.626344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:51.626393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:51.659551  301425 cri.go:89] found id: ""
	I0729 13:41:51.659578  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.659586  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:51.659592  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:51.659642  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:51.696842  301425 cri.go:89] found id: ""
	I0729 13:41:51.696868  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.696876  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:51.696882  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:51.696937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:51.737209  301425 cri.go:89] found id: ""
	I0729 13:41:51.737237  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.737246  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:51.737253  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:51.737317  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:51.772782  301425 cri.go:89] found id: ""
	I0729 13:41:51.772829  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.772842  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:51.772850  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:51.772921  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:51.806649  301425 cri.go:89] found id: ""
	I0729 13:41:51.806679  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.806690  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:51.806698  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:51.806771  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:51.848950  301425 cri.go:89] found id: ""
	I0729 13:41:51.848978  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.848989  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:51.848997  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:51.849065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:51.884875  301425 cri.go:89] found id: ""
	I0729 13:41:51.884902  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.884910  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:51.884920  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:51.884932  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.964282  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:51.964322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:52.004218  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:52.004251  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:52.056230  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:52.056266  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.069591  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:52.069622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:52.142552  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:54.643154  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:54.657199  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:54.657259  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:54.694124  301425 cri.go:89] found id: ""
	I0729 13:41:54.694152  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.694159  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:54.694165  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:54.694221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:54.732072  301425 cri.go:89] found id: ""
	I0729 13:41:54.732109  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.732119  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:54.732127  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:54.732194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:54.768257  301425 cri.go:89] found id: ""
	I0729 13:41:54.768294  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.768306  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:54.768314  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:54.768383  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:54.807596  301425 cri.go:89] found id: ""
	I0729 13:41:54.807631  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.807643  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:54.807651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:54.807716  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:54.845107  301425 cri.go:89] found id: ""
	I0729 13:41:54.845134  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.845142  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:54.845148  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:54.845197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:54.880627  301425 cri.go:89] found id: ""
	I0729 13:41:54.880655  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.880667  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:54.880675  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:54.880750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:54.918122  301425 cri.go:89] found id: ""
	I0729 13:41:54.918151  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.918159  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:54.918165  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:54.918219  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:54.956943  301425 cri.go:89] found id: ""
	I0729 13:41:54.956986  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.956999  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:54.957022  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:54.957036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:55.032512  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:55.032547  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:55.032564  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:55.116653  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:55.116699  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:55.177030  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:55.177059  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:55.238789  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:55.238831  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.166339  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:54.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:53.824694  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:56.324761  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:55.324698  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.326135  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.753504  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:57.766354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:57.766436  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:57.802691  301425 cri.go:89] found id: ""
	I0729 13:41:57.802728  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.802740  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:57.802746  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:57.802807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:57.839800  301425 cri.go:89] found id: ""
	I0729 13:41:57.839823  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.839830  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:57.839846  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:57.839902  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:57.881592  301425 cri.go:89] found id: ""
	I0729 13:41:57.881617  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.881625  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:57.881631  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:57.881681  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.916245  301425 cri.go:89] found id: ""
	I0729 13:41:57.916273  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.916282  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:57.916290  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:57.916346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:57.952224  301425 cri.go:89] found id: ""
	I0729 13:41:57.952261  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.952272  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:57.952280  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:57.952340  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:57.985508  301425 cri.go:89] found id: ""
	I0729 13:41:57.985537  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.985548  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:57.985557  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:57.985624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:58.022354  301425 cri.go:89] found id: ""
	I0729 13:41:58.022382  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.022391  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:58.022397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:58.022462  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:58.055865  301425 cri.go:89] found id: ""
	I0729 13:41:58.055891  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.055900  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:58.055914  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:58.055931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:58.069143  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:58.069177  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:58.143137  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:58.143164  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:58.143183  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:58.224631  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:58.224672  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:58.266437  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:58.266470  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:00.819300  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:00.834195  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:00.834258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:00.869660  301425 cri.go:89] found id: ""
	I0729 13:42:00.869697  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.869709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:00.869717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:00.869777  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:00.915601  301425 cri.go:89] found id: ""
	I0729 13:42:00.915630  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.915638  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:00.915644  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:00.915694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:00.956981  301425 cri.go:89] found id: ""
	I0729 13:42:00.957020  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.957028  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:00.957034  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:00.957094  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.166038  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.666455  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.666824  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:58.824729  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.825513  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.825074  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.826480  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.995761  301425 cri.go:89] found id: ""
	I0729 13:42:00.995793  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.995801  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:00.995817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:00.995869  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:01.047668  301425 cri.go:89] found id: ""
	I0729 13:42:01.047699  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.047707  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:01.047713  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:01.047787  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:01.085178  301425 cri.go:89] found id: ""
	I0729 13:42:01.085209  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.085217  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:01.085224  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:01.085278  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:01.125282  301425 cri.go:89] found id: ""
	I0729 13:42:01.125310  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.125320  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:01.125329  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:01.125396  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:01.165972  301425 cri.go:89] found id: ""
	I0729 13:42:01.166005  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.166021  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:01.166033  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:01.166049  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:01.236500  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:01.236523  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:01.236540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:01.320918  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:01.320959  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:01.366975  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:01.367015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:01.420347  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:01.420389  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:03.936048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:03.949603  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:03.949679  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:03.987529  301425 cri.go:89] found id: ""
	I0729 13:42:03.987557  301425 logs.go:276] 0 containers: []
	W0729 13:42:03.987567  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:03.987574  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:03.987639  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:04.027325  301425 cri.go:89] found id: ""
	I0729 13:42:04.027355  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.027365  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:04.027372  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:04.027437  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:04.063019  301425 cri.go:89] found id: ""
	I0729 13:42:04.063050  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.063059  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:04.063065  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:04.063117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:04.101106  301425 cri.go:89] found id: ""
	I0729 13:42:04.101135  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.101146  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:04.101153  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:04.101242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:04.137186  301425 cri.go:89] found id: ""
	I0729 13:42:04.137219  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.137230  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:04.137238  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:04.137302  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:04.175732  301425 cri.go:89] found id: ""
	I0729 13:42:04.175761  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.175770  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:04.175776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:04.175826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:04.213265  301425 cri.go:89] found id: ""
	I0729 13:42:04.213296  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.213307  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:04.213315  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:04.213381  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:04.248581  301425 cri.go:89] found id: ""
	I0729 13:42:04.248609  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.248617  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:04.248627  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:04.248643  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:04.303277  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:04.303400  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:04.317518  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:04.317547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:04.385209  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:04.385229  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:04.385242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:04.470629  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:04.470680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:04.167299  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.168006  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.324087  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:05.324904  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.826588  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.325326  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:08.326125  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.012455  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:07.028535  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:07.028621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:07.063453  301425 cri.go:89] found id: ""
	I0729 13:42:07.063496  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.063505  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:07.063511  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:07.063582  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:07.098243  301425 cri.go:89] found id: ""
	I0729 13:42:07.098274  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.098284  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:07.098291  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:07.098357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:07.138122  301425 cri.go:89] found id: ""
	I0729 13:42:07.138149  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.138157  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:07.138162  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:07.138213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:07.176772  301425 cri.go:89] found id: ""
	I0729 13:42:07.176814  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.176826  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:07.176835  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:07.176894  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:07.214867  301425 cri.go:89] found id: ""
	I0729 13:42:07.214898  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.214914  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:07.214920  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:07.214979  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:07.253443  301425 cri.go:89] found id: ""
	I0729 13:42:07.253471  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.253481  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:07.253490  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:07.253550  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:07.287284  301425 cri.go:89] found id: ""
	I0729 13:42:07.287326  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.287338  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:07.287349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:07.287411  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:07.330550  301425 cri.go:89] found id: ""
	I0729 13:42:07.330577  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.330588  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:07.330599  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:07.330620  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:07.384226  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:07.384268  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:07.398790  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:07.398817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:07.462868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:07.462893  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:07.462914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:07.538665  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:07.538706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.078452  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:10.091962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:10.092027  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:10.127401  301425 cri.go:89] found id: ""
	I0729 13:42:10.127434  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.127445  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:10.127454  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:10.127531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:10.161088  301425 cri.go:89] found id: ""
	I0729 13:42:10.161117  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.161127  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:10.161134  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:10.161187  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:10.199721  301425 cri.go:89] found id: ""
	I0729 13:42:10.199751  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.199763  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:10.199769  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:10.199821  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:10.237067  301425 cri.go:89] found id: ""
	I0729 13:42:10.237106  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.237120  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:10.237127  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:10.237191  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:10.275863  301425 cri.go:89] found id: ""
	I0729 13:42:10.275894  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.275909  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:10.275918  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:10.275981  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:10.313234  301425 cri.go:89] found id: ""
	I0729 13:42:10.313262  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.313270  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:10.313276  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:10.313334  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:10.353530  301425 cri.go:89] found id: ""
	I0729 13:42:10.353558  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.353569  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:10.353576  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:10.353644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:10.389488  301425 cri.go:89] found id: ""
	I0729 13:42:10.389516  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.389527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:10.389539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:10.389562  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.428705  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:10.428740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:10.484413  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:10.484456  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:10.499203  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:10.499248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:10.570868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:10.570894  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:10.570907  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:08.667158  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:11.166721  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.825638  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.324753  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.326752  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:12.826001  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:13.151788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:13.165297  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:13.165367  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:13.203752  301425 cri.go:89] found id: ""
	I0729 13:42:13.203786  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.203798  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:13.203805  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:13.203874  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:13.240454  301425 cri.go:89] found id: ""
	I0729 13:42:13.240491  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.240499  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:13.240504  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:13.240556  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:13.276508  301425 cri.go:89] found id: ""
	I0729 13:42:13.276536  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.276545  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:13.276553  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:13.276617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:13.311252  301425 cri.go:89] found id: ""
	I0729 13:42:13.311280  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.311291  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:13.311299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:13.311353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:13.351777  301425 cri.go:89] found id: ""
	I0729 13:42:13.351808  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.351817  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:13.351823  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:13.351881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:13.389020  301425 cri.go:89] found id: ""
	I0729 13:42:13.389049  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.389058  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:13.389064  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:13.389126  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:13.424353  301425 cri.go:89] found id: ""
	I0729 13:42:13.424387  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.424395  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:13.424401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:13.424451  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:13.460755  301425 cri.go:89] found id: ""
	I0729 13:42:13.460788  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.460817  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:13.460830  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:13.460850  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:13.500201  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:13.500234  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:13.553319  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:13.553357  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:13.567496  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:13.567529  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:13.644662  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:13.644686  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:13.644700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:13.667287  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.160289  301044 pod_ready.go:81] duration metric: took 4m0.000442608s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:16.160321  301044 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 13:42:16.160342  301044 pod_ready.go:38] duration metric: took 4m7.984743222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:16.160378  301044 kubeadm.go:597] duration metric: took 4m16.091281244s to restartPrimaryControlPlane
	W0729 13:42:16.160459  301044 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:16.160486  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:12.825387  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.826853  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.827679  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.829149  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326337  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326370  300746 pod_ready.go:81] duration metric: took 4m0.007721109s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:17.326383  300746 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:42:17.326392  300746 pod_ready.go:38] duration metric: took 4m8.417741792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:17.326410  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:42:17.326446  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:17.326514  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:17.373993  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.374027  300746 cri.go:89] found id: ""
	I0729 13:42:17.374037  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:17.374118  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.384841  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:17.384929  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:17.422219  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.422253  300746 cri.go:89] found id: ""
	I0729 13:42:17.422263  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:17.422349  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.427319  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:17.427385  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:17.469310  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:17.469336  300746 cri.go:89] found id: ""
	I0729 13:42:17.469347  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:17.469412  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.474501  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:17.474590  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:17.520767  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:17.520808  300746 cri.go:89] found id: ""
	I0729 13:42:17.520818  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:17.520881  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.525543  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:17.525643  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:17.572718  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.572749  300746 cri.go:89] found id: ""
	I0729 13:42:17.572758  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:17.572839  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.577227  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:17.577304  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:17.614076  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.614098  300746 cri.go:89] found id: ""
	I0729 13:42:17.614106  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:17.614153  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.618404  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:17.618479  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:17.666242  300746 cri.go:89] found id: ""
	I0729 13:42:17.666275  300746 logs.go:276] 0 containers: []
	W0729 13:42:17.666285  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:17.666301  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:17.666373  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:17.713379  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:17.713411  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:17.713418  300746 cri.go:89] found id: ""
	I0729 13:42:17.713428  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:17.713493  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.719026  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.723948  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:17.723974  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:17.743561  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:17.743607  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.803393  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:17.803425  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.855689  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:17.855723  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.898327  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:17.898361  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.951024  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:17.951060  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:18.014040  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:18.014082  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:18.159937  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:18.159984  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:18.201626  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:18.201667  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:18.247168  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:18.247211  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:18.291431  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:18.291469  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:18.333636  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:18.333671  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.226602  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:16.242934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:16.243005  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:16.284033  301425 cri.go:89] found id: ""
	I0729 13:42:16.284064  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.284075  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:16.284083  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:16.284152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:16.328362  301425 cri.go:89] found id: ""
	I0729 13:42:16.328388  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.328396  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:16.328402  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:16.328464  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:16.372664  301425 cri.go:89] found id: ""
	I0729 13:42:16.372701  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.372712  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:16.372727  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:16.372818  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:16.416085  301425 cri.go:89] found id: ""
	I0729 13:42:16.416119  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.416130  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:16.416138  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:16.416194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:16.457786  301425 cri.go:89] found id: ""
	I0729 13:42:16.457819  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.457830  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:16.457838  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:16.457903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:16.498929  301425 cri.go:89] found id: ""
	I0729 13:42:16.498962  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.498971  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:16.498979  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:16.499043  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:16.546159  301425 cri.go:89] found id: ""
	I0729 13:42:16.546187  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.546199  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:16.546207  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:16.546270  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:16.585010  301425 cri.go:89] found id: ""
	I0729 13:42:16.585041  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.585052  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:16.585065  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:16.585081  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:16.639033  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:16.639079  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:16.656209  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:16.656242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:16.734835  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:16.734863  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:16.734940  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.818756  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:16.818798  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.370796  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:19.384267  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:19.384354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:19.425595  301425 cri.go:89] found id: ""
	I0729 13:42:19.425629  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.425641  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:19.425650  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:19.425715  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:19.461470  301425 cri.go:89] found id: ""
	I0729 13:42:19.461506  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.461517  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:19.461524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:19.461592  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:19.508232  301425 cri.go:89] found id: ""
	I0729 13:42:19.508265  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.508275  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:19.508283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:19.508360  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:19.546226  301425 cri.go:89] found id: ""
	I0729 13:42:19.546259  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.546275  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:19.546283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:19.546354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:19.581125  301425 cri.go:89] found id: ""
	I0729 13:42:19.581156  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.581167  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:19.581176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:19.581242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:19.619680  301425 cri.go:89] found id: ""
	I0729 13:42:19.619719  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.619728  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:19.619736  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:19.619800  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:19.657096  301425 cri.go:89] found id: ""
	I0729 13:42:19.657126  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.657136  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:19.657142  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:19.657203  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:19.697247  301425 cri.go:89] found id: ""
	I0729 13:42:19.697277  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.697286  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:19.697297  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:19.697312  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:19.714900  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:19.714935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:19.794118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:19.794145  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:19.794161  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:19.907077  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:19.907122  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.949841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:19.949871  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:19.324474  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:21.826117  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:18.858720  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:18.858773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:21.419344  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:21.440121  300746 api_server.go:72] duration metric: took 4m17.790553991s to wait for apiserver process to appear ...
	I0729 13:42:21.440149  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:42:21.440190  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:21.440242  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:21.485874  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:21.485897  300746 cri.go:89] found id: ""
	I0729 13:42:21.485905  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:21.485956  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.490424  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:21.490493  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:21.532174  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:21.532202  300746 cri.go:89] found id: ""
	I0729 13:42:21.532211  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:21.532259  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.536561  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:21.536622  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:21.579375  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:21.579397  300746 cri.go:89] found id: ""
	I0729 13:42:21.579404  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:21.579450  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.584710  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:21.584779  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:21.621437  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.621465  300746 cri.go:89] found id: ""
	I0729 13:42:21.621475  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:21.621536  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.625829  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:21.625898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:21.666063  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:21.666086  300746 cri.go:89] found id: ""
	I0729 13:42:21.666095  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:21.666162  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.670822  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:21.670898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:21.713993  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:21.714022  300746 cri.go:89] found id: ""
	I0729 13:42:21.714032  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:21.714099  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.718967  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:21.719044  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:21.761282  300746 cri.go:89] found id: ""
	I0729 13:42:21.761312  300746 logs.go:276] 0 containers: []
	W0729 13:42:21.761320  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:21.761327  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:21.761390  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:21.810085  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:21.810114  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:21.810121  300746 cri.go:89] found id: ""
	I0729 13:42:21.810130  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:21.810185  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.814713  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.819968  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:21.819996  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:21.834798  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:21.834823  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:21.957963  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:21.958000  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.995345  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:21.995376  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:22.037737  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:22.037773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:22.074774  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:22.074813  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:22.123172  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.123205  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.181432  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:22.181473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:22.237128  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:22.237162  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:22.285733  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:22.285766  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:22.328258  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:22.328291  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:22.381239  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.381276  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:22.840466  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:22.840504  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:22.515296  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:22.529187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:22.529286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:22.573033  301425 cri.go:89] found id: ""
	I0729 13:42:22.573070  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.573082  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:22.573091  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:22.573152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:22.608443  301425 cri.go:89] found id: ""
	I0729 13:42:22.608476  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.608489  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:22.608496  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:22.608566  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:22.641672  301425 cri.go:89] found id: ""
	I0729 13:42:22.641704  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.641716  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:22.641724  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:22.641781  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:22.673902  301425 cri.go:89] found id: ""
	I0729 13:42:22.673934  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.673944  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:22.673952  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:22.674012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:22.715131  301425 cri.go:89] found id: ""
	I0729 13:42:22.715165  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.715179  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:22.715187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:22.715251  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:22.748807  301425 cri.go:89] found id: ""
	I0729 13:42:22.748838  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.748848  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:22.748856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:22.748924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:22.781972  301425 cri.go:89] found id: ""
	I0729 13:42:22.782002  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.782012  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:22.782021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:22.782088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:22.815791  301425 cri.go:89] found id: ""
	I0729 13:42:22.815823  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.815834  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:22.815848  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.815864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.873595  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:22.873631  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:22.888081  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:22.888123  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:22.959873  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:22.959899  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.959912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:23.040996  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:23.041035  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
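The repeated "Gathering logs for ..." cycles above reduce to querying crictl per control-plane component and falling back to journalctl for kubelet and CRI-O. A rough bash equivalent of one cycle, shown only for orientation (the real loop lives in minikube's logs.go and runs these commands over SSH), looks like this:

    # Illustrative sketch of one log-gathering cycle; the individual commands are taken from the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
        continue
      fi
      for id in $ids; do
        sudo /usr/bin/crictl logs --tail 400 "$id"
      done
    done
    sudo journalctl -u kubelet -n 400                                # kubelet
    sudo journalctl -u crio -n 400                                   # CRI-O
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a    # container status

On this node every component query returns an empty list, which is why the cycle only yields kubelet, dmesg, CRI-O and container-status output.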
	I0729 13:42:25.585159  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:25.604154  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.604240  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.645428  301425 cri.go:89] found id: ""
	I0729 13:42:25.645459  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.645466  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:25.645474  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.645534  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.682758  301425 cri.go:89] found id: ""
	I0729 13:42:25.682785  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.682793  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:25.682799  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.682864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.724297  301425 cri.go:89] found id: ""
	I0729 13:42:25.724330  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.724341  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:25.724349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.724401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.761124  301425 cri.go:89] found id: ""
	I0729 13:42:25.761157  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.761168  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:25.761177  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.761229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.802698  301425 cri.go:89] found id: ""
	I0729 13:42:25.802728  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.802741  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:25.802750  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.802804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.840472  301425 cri.go:89] found id: ""
	I0729 13:42:25.840499  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.840509  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:25.840516  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.840586  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.875217  301425 cri.go:89] found id: ""
	I0729 13:42:25.875255  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.875267  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.875273  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:25.875345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:25.919895  301425 cri.go:89] found id: ""
	I0729 13:42:25.919937  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.919948  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:25.919963  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.919988  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:24.324138  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:26.324843  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:25.399606  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:42:25.405339  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:42:25.406585  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:42:25.406607  300746 api_server.go:131] duration metric: took 3.966451518s to wait for apiserver health ...
	I0729 13:42:25.406615  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:42:25.406640  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.406686  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.442039  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:25.442068  300746 cri.go:89] found id: ""
	I0729 13:42:25.442079  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:25.442140  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.446769  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.446830  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.482122  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:25.482144  300746 cri.go:89] found id: ""
	I0729 13:42:25.482156  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:25.482211  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.486666  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.486729  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.534553  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:25.534584  300746 cri.go:89] found id: ""
	I0729 13:42:25.534595  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:25.534657  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.539546  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.539624  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.577538  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.577562  300746 cri.go:89] found id: ""
	I0729 13:42:25.577572  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:25.577635  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.582377  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.582457  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.628918  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:25.628945  300746 cri.go:89] found id: ""
	I0729 13:42:25.628955  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:25.629027  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.633502  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.633592  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.673133  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.673156  300746 cri.go:89] found id: ""
	I0729 13:42:25.673163  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:25.673210  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.677905  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.677994  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.724757  300746 cri.go:89] found id: ""
	I0729 13:42:25.724780  300746 logs.go:276] 0 containers: []
	W0729 13:42:25.724805  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.724813  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:25.724887  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:25.775101  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.775130  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:25.775136  300746 cri.go:89] found id: ""
	I0729 13:42:25.775144  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:25.775219  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.782008  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.787032  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:25.787064  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.834985  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:25.835026  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.897295  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:25.897338  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.938020  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.938053  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:26.002775  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:26.002808  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:26.021431  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:26.021473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:26.071861  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:26.071898  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:26.130018  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:26.130057  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:26.170233  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:26.170290  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:26.207687  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.207718  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.600518  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:26.600575  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:26.707024  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:26.707074  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:26.753205  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.753240  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:29.302597  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:42:29.302626  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.302630  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.302634  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.302638  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.302641  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.302644  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.302649  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.302654  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.302661  300746 system_pods.go:74] duration metric: took 3.896040202s to wait for pod list to return data ...
	I0729 13:42:29.302670  300746 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:42:29.305640  300746 default_sa.go:45] found service account: "default"
	I0729 13:42:29.305668  300746 default_sa.go:55] duration metric: took 2.989028ms for default service account to be created ...
	I0729 13:42:29.305679  300746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:42:29.310472  300746 system_pods.go:86] 8 kube-system pods found
	I0729 13:42:29.310495  300746 system_pods.go:89] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.310500  300746 system_pods.go:89] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.310505  300746 system_pods.go:89] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.310509  300746 system_pods.go:89] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.310513  300746 system_pods.go:89] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.310517  300746 system_pods.go:89] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.310523  300746 system_pods.go:89] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.310528  300746 system_pods.go:89] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.310536  300746 system_pods.go:126] duration metric: took 4.851477ms to wait for k8s-apps to be running ...
	I0729 13:42:29.310545  300746 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:42:29.310580  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.329123  300746 system_svc.go:56] duration metric: took 18.569258ms WaitForService to wait for kubelet
	I0729 13:42:29.329155  300746 kubeadm.go:582] duration metric: took 4m25.679589837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:42:29.329182  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:42:29.332696  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:42:29.332726  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:42:29.332741  300746 node_conditions.go:105] duration metric: took 3.551684ms to run NodePressure ...
	I0729 13:42:29.332756  300746 start.go:241] waiting for startup goroutines ...
	I0729 13:42:29.332770  300746 start.go:246] waiting for cluster config update ...
	I0729 13:42:29.332784  300746 start.go:255] writing updated cluster config ...
	I0729 13:42:29.333168  300746 ssh_runner.go:195] Run: rm -f paused
	I0729 13:42:29.394738  300746 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:42:29.396826  300746 out.go:177] * Done! kubectl is now configured to use "no-preload-566777" cluster and "default" namespace by default
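The "Done!" line above is gated on the apiserver health probe recorded a few lines earlier (api_server.go polling https://192.168.61.84:8443/healthz until it returns 200 "ok"). A minimal manual equivalent of that probe, assuming curl is available on the node (minikube itself uses a Go HTTP client rather than curl), would be:

    # Poll the apiserver healthz endpoint until it reports "ok" (illustrative only, not part of the test run).
    until curl -fsk https://192.168.61.84:8443/healthz | grep -qx ok; do
      sleep 1
    done
    echo "apiserver healthy"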
	I0729 13:42:25.981964  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:25.982005  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:25.997546  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:25.997576  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:26.075879  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
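The repeated "connection refused" from describe nodes is expected at this point: every crictl query above returned an empty container list, so nothing is listening on 8443 yet. Quick manual checks that would confirm the same picture on the node (illustrative; assumes ss and curl are installed there) are:

    sudo crictl ps -a --name=kube-apiserver              # empty output -> no apiserver container exists
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || true       # fails until kubeadm brings the apiserver up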
	I0729 13:42:26.075901  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.075917  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.158552  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.158593  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:28.704328  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:28.718946  301425 kubeadm.go:597] duration metric: took 4m3.546660825s to restartPrimaryControlPlane
	W0729 13:42:28.719041  301425 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:28.719086  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:29.251866  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.267009  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:29.277498  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:29.287980  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:29.288003  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:29.288054  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:42:29.297830  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:29.297890  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:29.308263  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:42:29.318332  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:29.318388  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:29.328684  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.339841  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:29.339894  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.351304  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:42:29.363901  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:29.363960  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
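The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so the subsequent kubeadm init can rewrite it. Condensed into a standalone sketch (endpoint and port vary per profile; this cluster uses 8443):

    # Stale-kubeconfig cleanup pattern, condensed from the log above (illustrative).
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"    # let kubeadm init regenerate it
      fi
    done

Here all four files are already missing, so every grep fails and the rm calls are effectively no-ops.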
	I0729 13:42:29.377255  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:29.453113  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:42:29.453212  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:29.609835  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:29.609970  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:29.610106  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:29.812529  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:29.814455  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:29.814551  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:29.814633  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:29.814727  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:29.814799  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:29.814915  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:29.814979  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:29.815695  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:29.816098  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:29.816602  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:29.817114  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:29.817184  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:29.817266  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:30.122967  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:30.287162  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:30.336346  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:30.516317  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:30.532829  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:30.533732  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:30.533809  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:30.672345  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:30.674334  301425 out.go:204]   - Booting up control plane ...
	I0729 13:42:30.674492  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:30.681661  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:30.681784  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:30.683350  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:30.687290  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:42:28.327998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:30.823998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:32.824105  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:34.825475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:37.324435  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:39.824490  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:42.323305  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:44.329376  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:46.823645  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:47.980926  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.820407091s)
	I0729 13:42:47.981010  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:47.997344  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:48.007813  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:48.017519  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:48.017538  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:48.017579  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:42:48.028739  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:48.028819  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:48.038417  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:42:48.047864  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:48.047921  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:48.057408  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.066977  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:48.067040  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.077017  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:42:48.087204  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:48.087267  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:48.097659  301044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:48.149712  301044 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:42:48.149883  301044 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:48.277280  301044 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:48.277441  301044 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:48.277578  301044 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:48.505523  301044 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:48.507718  301044 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:48.507827  301044 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:48.507941  301044 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:48.508049  301044 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:48.508139  301044 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:48.508245  301044 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:48.508334  301044 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:48.508431  301044 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:48.508518  301044 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:48.508622  301044 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:48.508740  301044 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:48.508824  301044 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:48.508949  301044 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:48.545220  301044 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:48.620528  301044 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:42:48.781015  301044 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:49.039301  301044 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:49.104540  301044 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:49.105022  301044 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:49.107524  301044 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:49.109579  301044 out.go:204]   - Booting up control plane ...
	I0729 13:42:49.109698  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:49.109836  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:49.109924  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:49.129789  301044 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:49.130766  301044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:49.130844  301044 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:49.272901  301044 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:42:49.273017  301044 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:42:50.274804  301044 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001903151s
	I0729 13:42:50.274906  301044 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:42:48.825621  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:51.324025  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.276427  301044 kubeadm.go:310] [api-check] The API server is healthy after 5.001280529s
	I0729 13:42:55.289666  301044 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:42:55.309747  301044 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:42:55.343304  301044 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:42:55.343537  301044 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-972693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:42:55.366319  301044 kubeadm.go:310] [bootstrap-token] Using token: bvsox4.ktqddck1jfi3aduz
	I0729 13:42:55.367592  301044 out.go:204]   - Configuring RBAC rules ...
	I0729 13:42:55.367695  301044 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:42:55.380118  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:42:55.393704  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:42:55.397859  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:42:55.401567  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:42:55.407851  301044 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:42:55.684714  301044 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:42:56.128597  301044 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:42:56.683879  301044 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:42:56.685050  301044 kubeadm.go:310] 
	I0729 13:42:56.685127  301044 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:42:56.685137  301044 kubeadm.go:310] 
	I0729 13:42:56.685216  301044 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:42:56.685226  301044 kubeadm.go:310] 
	I0729 13:42:56.685252  301044 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:42:56.685335  301044 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:42:56.685414  301044 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:42:56.685422  301044 kubeadm.go:310] 
	I0729 13:42:56.685527  301044 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:42:56.685550  301044 kubeadm.go:310] 
	I0729 13:42:56.685607  301044 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:42:56.685617  301044 kubeadm.go:310] 
	I0729 13:42:56.685684  301044 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:42:56.685800  301044 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:42:56.685916  301044 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:42:56.685933  301044 kubeadm.go:310] 
	I0729 13:42:56.686048  301044 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:42:56.686149  301044 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:42:56.686162  301044 kubeadm.go:310] 
	I0729 13:42:56.686277  301044 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686416  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 13:42:56.686449  301044 kubeadm.go:310] 	--control-plane 
	I0729 13:42:56.686462  301044 kubeadm.go:310] 
	I0729 13:42:56.686562  301044 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:42:56.686571  301044 kubeadm.go:310] 
	I0729 13:42:56.686687  301044 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686839  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 13:42:56.687046  301044 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
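The sha256:... value embedded in the join command is kubeadm's discovery-token CA certificate hash, a SHA-256 digest over the cluster CA's Subject Public Key Info. The standard recipe from the kubeadm documentation for recomputing it on the control-plane node (assumes openssl is installed; shown for reference, not executed by this test) is:

    # Recompute the discovery-token CA cert hash for /etc/kubernetes/pki/ca.crt.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'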
	I0729 13:42:56.687123  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:42:56.687140  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:42:56.689013  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:42:53.324453  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.326475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:56.690282  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:42:56.703026  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
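The 496-byte conflist itself is not reproduced in the log. A bridge CNI conflist of the kind minikube writes for the "bridge" option has roughly the following shape; the concrete names, subnet and plugin list below are assumptions for illustration, not the file's actual contents:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }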
	I0729 13:42:56.722677  301044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-972693 minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=default-k8s-diff-port-972693 minikube.k8s.io/primary=true
	I0729 13:42:56.738921  301044 ops.go:34] apiserver oom_adj: -16
	I0729 13:42:56.902369  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.402842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.902902  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.403358  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.903112  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.402540  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.902605  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.402440  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.903011  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:01.403295  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.823966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:00.323772  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:01.818493  300705 pod_ready.go:81] duration metric: took 4m0.000972043s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:43:01.818528  300705 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:43:01.818537  300705 pod_ready.go:38] duration metric: took 4m4.037818748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
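This WaitExtra timeout only records that metrics-server never reported Ready within the four-minute budget; the log does not show why. For manual triage outside the test harness, with kubeconfig pointed at the affected profile, the usual first steps would be (commands assume the standard k8s-app=metrics-server label used by the addon; they are not part of the test run):

    # Manual triage of a metrics-server pod that never becomes Ready (illustrative).
    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20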
	I0729 13:43:01.818555  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:01.818589  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:01.818643  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:01.874334  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:01.874359  300705 cri.go:89] found id: ""
	I0729 13:43:01.874369  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:01.874439  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.879122  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:01.879214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:01.919779  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:01.919804  300705 cri.go:89] found id: ""
	I0729 13:43:01.919814  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:01.919874  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.924895  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:01.924963  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:01.970365  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:01.970386  300705 cri.go:89] found id: ""
	I0729 13:43:01.970394  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:01.970444  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.975331  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:01.975409  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:02.013029  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.013062  300705 cri.go:89] found id: ""
	I0729 13:43:02.013074  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:02.013136  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.017958  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:02.018019  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:02.062357  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.062385  300705 cri.go:89] found id: ""
	I0729 13:43:02.062394  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:02.062463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.066791  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:02.066841  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:02.103790  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:02.103812  300705 cri.go:89] found id: ""
	I0729 13:43:02.103821  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:02.103882  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.108242  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:02.108293  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:02.151089  300705 cri.go:89] found id: ""
	I0729 13:43:02.151122  300705 logs.go:276] 0 containers: []
	W0729 13:43:02.151133  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:02.151141  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:02.151204  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:02.205700  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:02.205727  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.205732  300705 cri.go:89] found id: ""
	I0729 13:43:02.205741  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:02.205790  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.210332  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.214889  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:02.214913  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:02.229589  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:02.229621  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:02.278361  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:02.278394  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:02.319117  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:02.319146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.357874  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:02.357908  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.402114  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:02.402146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.442480  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:02.442514  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:01.903256  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.403400  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.902925  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.402616  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.903161  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.403255  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.902489  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.402506  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.902530  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:06.402436  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.953914  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:02.953961  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:03.013404  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:03.013441  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:03.151261  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:03.151294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:03.199910  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:03.199964  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:03.257103  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:03.257137  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:03.308519  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:03.308559  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:05.857929  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:05.878306  300705 api_server.go:72] duration metric: took 4m15.820258046s to wait for apiserver process to appear ...
	I0729 13:43:05.878338  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:05.878383  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:05.878451  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:05.924031  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:05.924071  300705 cri.go:89] found id: ""
	I0729 13:43:05.924083  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:05.924151  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.929284  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:05.929363  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:05.968980  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:05.969003  300705 cri.go:89] found id: ""
	I0729 13:43:05.969010  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:05.969056  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.973451  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:05.973516  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:06.011760  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.011784  300705 cri.go:89] found id: ""
	I0729 13:43:06.011794  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:06.011857  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.016065  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:06.016132  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:06.066319  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.066345  300705 cri.go:89] found id: ""
	I0729 13:43:06.066353  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:06.066420  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.071060  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:06.071120  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:06.117383  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.117405  300705 cri.go:89] found id: ""
	I0729 13:43:06.117413  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:06.117463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.121968  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:06.122053  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:06.156125  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.156151  300705 cri.go:89] found id: ""
	I0729 13:43:06.156160  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:06.156209  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.160301  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:06.160366  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:06.206751  300705 cri.go:89] found id: ""
	I0729 13:43:06.206780  300705 logs.go:276] 0 containers: []
	W0729 13:43:06.206790  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:06.206798  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:06.206860  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:06.248884  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.248918  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:06.248925  300705 cri.go:89] found id: ""
	I0729 13:43:06.248936  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:06.249006  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.253087  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.257229  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:06.257252  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.291495  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:06.291528  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.330190  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:06.330219  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.366500  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:06.366536  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.424871  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:06.424906  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:06.855025  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:06.855069  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:06.870025  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:06.870055  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:06.986590  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:06.986630  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:07.036972  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:07.037007  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:07.092602  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:07.092646  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:07.135326  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:07.135366  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:07.190208  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:07.190247  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:07.241865  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:07.241896  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.902842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.402861  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.903148  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.402619  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.902869  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.403349  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.903277  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.402468  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.535843  301044 kubeadm.go:1113] duration metric: took 13.813154738s to wait for elevateKubeSystemPrivileges
	I0729 13:43:10.535879  301044 kubeadm.go:394] duration metric: took 5m10.527995876s to StartCluster
	I0729 13:43:10.535899  301044 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.535991  301044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:43:10.538845  301044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.539141  301044 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:43:10.539343  301044 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:43:10.539513  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:43:10.539528  301044 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539556  301044 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539574  301044 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-972693"
	I0729 13:43:10.539587  301044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-972693"
	I0729 13:43:10.539600  301044 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539623  301044 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.539635  301044 addons.go:243] addon metrics-server should already be in state true
	I0729 13:43:10.539692  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	W0729 13:43:10.539594  301044 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:43:10.539817  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.540342  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540368  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540380  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540399  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540664  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540814  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.542249  301044 out.go:177] * Verifying Kubernetes components...
	I0729 13:43:10.543974  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:43:10.561555  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0729 13:43:10.561585  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0729 13:43:10.561820  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 13:43:10.562096  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562160  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562579  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562694  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562711  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.562750  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562766  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563224  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563236  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563496  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.563516  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563793  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563923  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.563959  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563982  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.564526  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.564781  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.569041  301044 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.569062  301044 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:43:10.569091  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.569443  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.569462  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.580340  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0729 13:43:10.580852  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.581371  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.581384  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.581724  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.581911  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.583937  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0729 13:43:10.584108  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.584422  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.584864  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.584881  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.585262  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.585445  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.586285  301044 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:43:10.586973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.587855  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:43:10.587873  301044 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:43:10.587907  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.588885  301044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:43:10.689091  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:43:10.689558  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:10.689837  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:10.590240  301044 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.590258  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:43:10.590275  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.592026  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0729 13:43:10.592306  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.592778  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.592859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.592877  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.593162  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.593295  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.593382  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.593455  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.593663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594055  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.594082  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594233  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.594388  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.594485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.594621  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.594882  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.594892  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.595227  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.595663  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.595680  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.611094  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0729 13:43:10.611617  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.612200  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.612224  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.612600  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.612973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.614541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.614743  301044 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:10.614757  301044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:43:10.614774  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.617611  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.618064  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.618416  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.618595  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.618754  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.791924  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:43:10.850744  301044 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866102  301044 node_ready.go:49] node "default-k8s-diff-port-972693" has status "Ready":"True"
	I0729 13:43:10.866137  301044 node_ready.go:38] duration metric: took 15.35404ms for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866171  301044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:10.877661  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:10.958120  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.981335  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:43:10.981363  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:43:10.982804  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:11.145078  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:43:11.145108  301044 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:43:11.236628  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:11.236658  301044 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:43:11.308646  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315025489s)
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290345752s)
	I0729 13:43:12.273254  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273270  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273283  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273296  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273572  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273589  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273598  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273606  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273704  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273721  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273731  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273739  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.275558  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275601  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275616  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.275624  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275634  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275644  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.309442  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.309473  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.309839  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.309888  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.309909  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.464546  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.155855113s)
	I0729 13:43:12.464601  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.464614  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465037  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465060  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465071  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.465081  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465398  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.465418  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465476  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465494  301044 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-972693"
	I0729 13:43:12.467315  301044 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:43:09.811571  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:43:09.817221  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:43:09.818319  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:09.818342  300705 api_server.go:131] duration metric: took 3.939996032s to wait for apiserver health ...
	I0729 13:43:09.818350  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:09.818373  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:09.818425  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:09.861856  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:09.861883  300705 cri.go:89] found id: ""
	I0729 13:43:09.861894  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:09.861962  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.867142  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:09.867216  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:09.909767  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:09.909795  300705 cri.go:89] found id: ""
	I0729 13:43:09.909808  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:09.909877  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.914410  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:09.914482  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:09.953540  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:09.953568  300705 cri.go:89] found id: ""
	I0729 13:43:09.953578  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:09.953637  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.958140  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:09.958214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:09.999809  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:09.999836  300705 cri.go:89] found id: ""
	I0729 13:43:09.999846  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:09.999911  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.004505  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:10.004587  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:10.049146  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.049173  300705 cri.go:89] found id: ""
	I0729 13:43:10.049182  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:10.049252  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.053631  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:10.053698  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:10.090361  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.090386  300705 cri.go:89] found id: ""
	I0729 13:43:10.090396  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:10.090442  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.095528  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:10.095588  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:10.131892  300705 cri.go:89] found id: ""
	I0729 13:43:10.131925  300705 logs.go:276] 0 containers: []
	W0729 13:43:10.131937  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:10.131944  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:10.132008  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:10.169101  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.169127  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.169133  300705 cri.go:89] found id: ""
	I0729 13:43:10.169142  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:10.169203  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.174716  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.179196  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:10.179217  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:10.222803  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:10.222833  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:10.265944  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:10.265975  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.310266  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:10.310294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.370562  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:10.370611  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.415759  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:10.415803  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:10.467672  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:10.467702  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:10.531249  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:10.531293  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:10.550454  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:10.550485  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:10.709028  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:10.709068  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:10.761048  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:10.761093  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:10.813125  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:10.813169  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.852581  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:10.852608  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:13.725236  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:43:13.725272  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.725279  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.725284  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.725289  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.725293  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.725298  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.725306  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.725312  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.725322  300705 system_pods.go:74] duration metric: took 3.906966083s to wait for pod list to return data ...
	I0729 13:43:13.725335  300705 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:13.727954  300705 default_sa.go:45] found service account: "default"
	I0729 13:43:13.727984  300705 default_sa.go:55] duration metric: took 2.638639ms for default service account to be created ...
	I0729 13:43:13.728032  300705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:13.733141  300705 system_pods.go:86] 8 kube-system pods found
	I0729 13:43:13.733163  300705 system_pods.go:89] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.733169  300705 system_pods.go:89] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.733173  300705 system_pods.go:89] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.733177  300705 system_pods.go:89] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.733181  300705 system_pods.go:89] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.733185  300705 system_pods.go:89] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.733191  300705 system_pods.go:89] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.733196  300705 system_pods.go:89] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.733205  300705 system_pods.go:126] duration metric: took 5.16021ms to wait for k8s-apps to be running ...
	I0729 13:43:13.733213  300705 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:13.733255  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:13.755011  300705 system_svc.go:56] duration metric: took 21.784065ms WaitForService to wait for kubelet
	I0729 13:43:13.755042  300705 kubeadm.go:582] duration metric: took 4m23.697000108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:13.755068  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:13.758549  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:13.758572  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:13.758586  300705 node_conditions.go:105] duration metric: took 3.512205ms to run NodePressure ...
	I0729 13:43:13.758601  300705 start.go:241] waiting for startup goroutines ...
	I0729 13:43:13.758612  300705 start.go:246] waiting for cluster config update ...
	I0729 13:43:13.758625  300705 start.go:255] writing updated cluster config ...
	I0729 13:43:13.758945  300705 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:13.810333  300705 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:13.812397  300705 out.go:177] * Done! kubectl is now configured to use "embed-certs-135920" cluster and "default" namespace by default
	I0729 13:43:12.468541  301044 addons.go:510] duration metric: took 1.929219306s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:43:12.887280  301044 pod_ready.go:102] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:13.386255  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.386279  301044 pod_ready.go:81] duration metric: took 2.508586907s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.386291  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391278  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.391302  301044 pod_ready.go:81] duration metric: took 5.00403ms for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391313  301044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396324  301044 pod_ready.go:92] pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.396343  301044 pod_ready.go:81] duration metric: took 5.022707ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396350  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403008  301044 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.403026  301044 pod_ready.go:81] duration metric: took 6.670677ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403035  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407836  301044 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.407856  301044 pod_ready.go:81] duration metric: took 4.814401ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407868  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783140  301044 pod_ready.go:92] pod "kube-proxy-tfsk9" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.783168  301044 pod_ready.go:81] duration metric: took 375.291599ms for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783181  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182560  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:14.182588  301044 pod_ready.go:81] duration metric: took 399.399691ms for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182597  301044 pod_ready.go:38] duration metric: took 3.316409576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:14.182610  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:14.182661  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:14.210715  301044 api_server.go:72] duration metric: took 3.671529553s to wait for apiserver process to appear ...
	I0729 13:43:14.210749  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:14.210790  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:43:14.214886  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:43:14.215773  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:14.215795  301044 api_server.go:131] duration metric: took 5.0389ms to wait for apiserver health ...
	I0729 13:43:14.215802  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:14.386356  301044 system_pods.go:59] 9 kube-system pods found
	I0729 13:43:14.386389  301044 system_pods.go:61] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.386394  301044 system_pods.go:61] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.386398  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.386401  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.386405  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.386409  301044 system_pods.go:61] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.386412  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.386417  301044 system_pods.go:61] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.386420  301044 system_pods.go:61] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.386430  301044 system_pods.go:74] duration metric: took 170.622271ms to wait for pod list to return data ...
	I0729 13:43:14.386437  301044 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:14.582618  301044 default_sa.go:45] found service account: "default"
	I0729 13:43:14.582643  301044 default_sa.go:55] duration metric: took 196.19918ms for default service account to be created ...
	I0729 13:43:14.582652  301044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:14.785669  301044 system_pods.go:86] 9 kube-system pods found
	I0729 13:43:14.785701  301044 system_pods.go:89] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.785707  301044 system_pods.go:89] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.785711  301044 system_pods.go:89] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.785719  301044 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.785723  301044 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.785727  301044 system_pods.go:89] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.785731  301044 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.785737  301044 system_pods.go:89] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.785741  301044 system_pods.go:89] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.785750  301044 system_pods.go:126] duration metric: took 203.092668ms to wait for k8s-apps to be running ...
	I0729 13:43:14.785756  301044 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:14.785801  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:14.802927  301044 system_svc.go:56] duration metric: took 17.160927ms WaitForService to wait for kubelet
	I0729 13:43:14.802957  301044 kubeadm.go:582] duration metric: took 4.263780375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:14.802977  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:14.983106  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:14.983135  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:14.983146  301044 node_conditions.go:105] duration metric: took 180.164781ms to run NodePressure ...
	I0729 13:43:14.983159  301044 start.go:241] waiting for startup goroutines ...
	I0729 13:43:14.983165  301044 start.go:246] waiting for cluster config update ...
	I0729 13:43:14.983175  301044 start.go:255] writing updated cluster config ...
	I0729 13:43:14.983443  301044 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:15.038438  301044 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:15.040318  301044 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-972693" cluster and "default" namespace by default
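The "minor skew: 0" note above is minikube comparing the kubectl client version with the cluster's Kubernetes version so that large client/server gaps are easy to spot. A quick way to eyeball the same numbers by hand, assuming kubectl is still pointed at the freshly configured default-k8s-diff-port-972693 context, is a hedged sketch like:

	# Print client and server versions; the minor fields are what the
	# "(minor skew: N)" figure above compares.
	kubectl version --output=yaml
	# Or go through minikube's bundled kubectl for this specific profile:
	minikube -p default-k8s-diff-port-972693 kubectl -- version
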
	I0729 13:43:15.690809  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:15.691011  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:25.691962  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:25.692244  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:45.693269  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:45.693473  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696107  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:44:25.696300  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696307  301425 kubeadm.go:310] 
	I0729 13:44:25.696341  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:44:25.696400  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:44:25.696419  301425 kubeadm.go:310] 
	I0729 13:44:25.696463  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:44:25.696510  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:44:25.696653  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:44:25.696674  301425 kubeadm.go:310] 
	I0729 13:44:25.696818  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:44:25.696868  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:44:25.696921  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:44:25.696930  301425 kubeadm.go:310] 
	I0729 13:44:25.697076  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:44:25.697192  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:44:25.697206  301425 kubeadm.go:310] 
	I0729 13:44:25.697349  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:44:25.697459  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:44:25.697568  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:44:25.697669  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:44:25.697680  301425 kubeadm.go:310] 
	I0729 13:44:25.698359  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:44:25.698490  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:44:25.698596  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:44:25.698771  301425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:44:25.698848  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:44:26.160539  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:26.175482  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:44:26.185562  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:44:26.185593  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:44:26.185657  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:44:26.195781  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:44:26.195865  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:44:26.207404  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:44:26.217068  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:44:26.217188  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:44:26.226075  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.234622  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:44:26.234684  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.243756  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:44:26.252630  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:44:26.252695  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:44:26.262846  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:44:26.340215  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:44:26.340318  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:44:26.496049  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:44:26.496199  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:44:26.496327  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:44:26.678135  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:44:26.680089  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:44:26.680173  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:44:26.680257  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:44:26.680378  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:44:26.680470  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:44:26.680570  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:44:26.680653  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:44:26.680751  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:44:26.681022  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:44:26.681519  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:44:26.681876  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:44:26.681994  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:44:26.682083  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:44:26.762680  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:44:26.922517  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:44:26.973731  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:44:27.193064  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:44:27.216477  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:44:27.219036  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:44:27.219293  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:44:27.386424  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:44:27.388194  301425 out.go:204]   - Booting up control plane ...
	I0729 13:44:27.388340  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:44:27.390345  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:44:27.391455  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:44:27.392303  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:44:27.394301  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:45:07.396989  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:45:07.397449  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:07.397719  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:12.397982  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:12.398297  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:22.398751  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:22.399010  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:42.399462  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:42.399675  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398413  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:46:22.398684  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398700  301425 kubeadm.go:310] 
	I0729 13:46:22.398763  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:46:22.398844  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:46:22.398886  301425 kubeadm.go:310] 
	I0729 13:46:22.398948  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:46:22.399002  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:46:22.399132  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:46:22.399145  301425 kubeadm.go:310] 
	I0729 13:46:22.399287  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:46:22.399346  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:46:22.399392  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:46:22.399404  301425 kubeadm.go:310] 
	I0729 13:46:22.399530  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:46:22.399610  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:46:22.399617  301425 kubeadm.go:310] 
	I0729 13:46:22.399735  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:46:22.399844  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:46:22.399943  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:46:22.400021  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:46:22.400035  301425 kubeadm.go:310] 
	I0729 13:46:22.400291  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:46:22.400370  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:46:22.400440  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:46:22.400520  301425 kubeadm.go:394] duration metric: took 7m57.286753846s to StartCluster
	I0729 13:46:22.400612  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:46:22.400692  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:46:22.446188  301425 cri.go:89] found id: ""
	I0729 13:46:22.446216  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.446225  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:46:22.446232  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:46:22.446289  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:46:22.484089  301425 cri.go:89] found id: ""
	I0729 13:46:22.484118  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.484128  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:46:22.484135  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:46:22.484197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:46:22.526817  301425 cri.go:89] found id: ""
	I0729 13:46:22.526846  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.526854  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:46:22.526860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:46:22.526912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:46:22.564787  301425 cri.go:89] found id: ""
	I0729 13:46:22.564834  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.564846  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:46:22.564854  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:46:22.564920  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:46:22.601843  301425 cri.go:89] found id: ""
	I0729 13:46:22.601881  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.601892  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:46:22.601900  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:46:22.601980  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:46:22.637420  301425 cri.go:89] found id: ""
	I0729 13:46:22.637448  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.637455  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:46:22.637462  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:46:22.637519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:46:22.672427  301425 cri.go:89] found id: ""
	I0729 13:46:22.672465  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.672476  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:46:22.672485  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:46:22.672549  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:46:22.708256  301425 cri.go:89] found id: ""
	I0729 13:46:22.708285  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.708294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:46:22.708306  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:46:22.708323  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:46:22.819287  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:46:22.819327  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:46:22.859298  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:46:22.859339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:46:22.914290  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:46:22.914342  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:46:22.936919  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:46:22.936951  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:46:23.035889  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 13:46:23.035939  301425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:46:23.035991  301425 out.go:239] * 
	W0729 13:46:23.036103  301425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.036137  301425 out.go:239] * 
	W0729 13:46:23.037370  301425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:46:23.040573  301425 out.go:177] 
	W0729 13:46:23.042130  301425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.042173  301425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:46:23.042193  301425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:46:23.043539  301425 out.go:177] 
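Before the CRI-O dump below, note that the suggestions printed above reduce to three steps on the node: check the kubelet unit, check which control-plane containers CRI-O actually started, and retry the start with the cgroup-driver override. A minimal sketch of that sequence, assuming SSH access to the node and using the placeholder <profile> because the failing profile's name is not shown in this excerpt, could be:

	# Inspect the kubelet service and its recent journal entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the override taken verbatim from the suggestion above
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
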
	
	
	==> CRI-O <==
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.329432258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261137329391208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf522a71-f182-48bd-a4ef-074f492deb72 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.330703622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7de0974a-2072-4def-aa46-f6275b365fa2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.330766096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7de0974a-2072-4def-aa46-f6275b365fa2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.331010774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7de0974a-2072-4def-aa46-f6275b365fa2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.379168060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1dd3417-e521-43f7-8882-0961be44af2f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.379271220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1dd3417-e521-43f7-8882-0961be44af2f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.381037614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b2fed44-7cc4-4537-87a1-a1d278c58949 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.381580617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261137381554856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b2fed44-7cc4-4537-87a1-a1d278c58949 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.382052968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e6b00fc-0fa4-4354-a79f-33362bfbf7e6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.382128972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e6b00fc-0fa4-4354-a79f-33362bfbf7e6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.382481716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e6b00fc-0fa4-4354-a79f-33362bfbf7e6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.434090957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a33eeeea-aa5f-431a-b60e-2f512038480c name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.434184236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a33eeeea-aa5f-431a-b60e-2f512038480c name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.435666384Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7aae0279-2a12-4829-805e-8fbf27a6644c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.436077242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261137436053294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7aae0279-2a12-4829-805e-8fbf27a6644c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.437114455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=425381f2-213f-47f3-b4e3-16f7bb7a95f3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.437234626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=425381f2-213f-47f3-b4e3-16f7bb7a95f3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.437634720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=425381f2-213f-47f3-b4e3-16f7bb7a95f3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.479462568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5828ce20-7c32-403b-b940-8f4aca66667a name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.479561217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5828ce20-7c32-403b-b940-8f4aca66667a name=/runtime.v1.RuntimeService/Version
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.480839874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=114df3f1-c8a7-445b-b55f-91091361e0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.481504522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261137481466242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=114df3f1-c8a7-445b-b55f-91091361e0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.482383736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ff34587-e57a-41da-a628-ceb80adca7fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.482473465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ff34587-e57a-41da-a628-ceb80adca7fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:52:17 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:52:17.482820565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ff34587-e57a-41da-a628-ceb80adca7fd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f04ef4fd91577       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f617655f1ffc9       storage-provisioner
	52dcd83ed0857       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5cba4a42b70da       coredns-7db6d8ff4d-t29vc
	d4f3e11d12458       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7e0a4668e08d7       coredns-7db6d8ff4d-zlz8m
	6deaf42164b30       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   b7f6a66020f31       kube-proxy-tfsk9
	ebbfcefc4958f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   c5ef647f2c6a3       kube-apiserver-default-k8s-diff-port-972693
	c623732bc8fb3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   2fe988c66f563       kube-controller-manager-default-k8s-diff-port-972693
	9f4531ab2ab82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   e28425cea2e23       etcd-default-k8s-diff-port-972693
	be0921a50fe25       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   44d2533ee117c       kube-scheduler-default-k8s-diff-port-972693
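
	The container status table above can be cross-checked directly on the node. A minimal sketch, assuming SSH access through the minikube profile named in these logs and the crictl binary bundled with the guest image:

	  # List pod sandboxes and all containers known to CRI-O (same data as the table above)
	  minikube -p default-k8s-diff-port-972693 ssh -- sudo crictl pods
	  minikube -p default-k8s-diff-port-972693 ssh -- sudo crictl ps -a
	  # The repeated Version/ListContainers debug entries earlier in this log are the CRI RPCs behind these commands
	  minikube -p default-k8s-diff-port-972693 ssh -- sudo crictl version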
	
	
	==> coredns [52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-972693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-972693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=default-k8s-diff-port-972693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:42:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-972693
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:52:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:48:22 +0000   Mon, 29 Jul 2024 13:42:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:48:22 +0000   Mon, 29 Jul 2024 13:42:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:48:22 +0000   Mon, 29 Jul 2024 13:42:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:48:22 +0000   Mon, 29 Jul 2024 13:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.34
	  Hostname:    default-k8s-diff-port-972693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 77c483e8de6e4df683fb384254beda0d
	  System UUID:                77c483e8-de6e-4df6-83fb-384254beda0d
	  Boot ID:                    f19d25a4-acf7-4e59-ad71-5d597d39b42f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-t29vc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-zlz8m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-972693                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-972693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-972693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-tfsk9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-972693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-wwxmx                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node default-k8s-diff-port-972693 event: Registered Node default-k8s-diff-port-972693 in Controller
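
	A sketch for re-querying the node state captured above, assuming the kubectl context carries the same name as the minikube profile:

	  # Reproduce the node description (context name assumed to match the profile name)
	  kubectl --context default-k8s-diff-port-972693 describe node default-k8s-diff-port-972693
	  # Check the metrics-server pod listed under Non-terminated Pods
	  kubectl --context default-k8s-diff-port-972693 -n kube-system get pod metrics-server-569cc877fc-wwxmx -o wide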
	
	
	==> dmesg <==
	[  +0.044583] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.509754] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.565067] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.640436] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.073909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065148] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.191585] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.121478] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.356058] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.686772] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.065461] kauditd_printk_skb: 130 callbacks suppressed
	[Jul29 13:38] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.649934] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.262408] kauditd_printk_skb: 84 callbacks suppressed
	[  +6.053209] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 13:42] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.590170] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +4.743682] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.814625] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[Jul29 13:43] systemd-fstab-generator[4122]: Ignoring "noauto" option for root device
	[  +0.133916] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 13:44] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4] <==
	{"level":"info","ts":"2024-07-29T13:42:50.884612Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T13:42:50.884815Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"41d396bb56c46004","initial-advertise-peer-urls":["https://192.168.50.34:2380"],"listen-peer-urls":["https://192.168.50.34:2380"],"advertise-client-urls":["https://192.168.50.34:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.34:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T13:42:50.884861Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T13:42:50.884963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41d396bb56c46004 switched to configuration voters=(4743300563910025220)"}
	{"level":"info","ts":"2024-07-29T13:42:50.885078Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ab9794b1ad75cdde","local-member-id":"41d396bb56c46004","added-peer-id":"41d396bb56c46004","added-peer-peer-urls":["https://192.168.50.34:2380"]}
	{"level":"info","ts":"2024-07-29T13:42:50.885161Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.34:2380"}
	{"level":"info","ts":"2024-07-29T13:42:50.885219Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.34:2380"}
	{"level":"info","ts":"2024-07-29T13:42:51.612367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41d396bb56c46004 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T13:42:51.612468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41d396bb56c46004 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T13:42:51.612517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41d396bb56c46004 received MsgPreVoteResp from 41d396bb56c46004 at term 1"}
	{"level":"info","ts":"2024-07-29T13:42:51.612546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41d396bb56c46004 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:42:51.612571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41d396bb56c46004 received MsgVoteResp from 41d396bb56c46004 at term 2"}
	{"level":"info","ts":"2024-07-29T13:42:51.612597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41d396bb56c46004 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T13:42:51.612626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 41d396bb56c46004 elected leader 41d396bb56c46004 at term 2"}
	{"level":"info","ts":"2024-07-29T13:42:51.61653Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"41d396bb56c46004","local-member-attributes":"{Name:default-k8s-diff-port-972693 ClientURLs:[https://192.168.50.34:2379]}","request-path":"/0/members/41d396bb56c46004/attributes","cluster-id":"ab9794b1ad75cdde","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:42:51.616688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:42:51.617078Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.623335Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:42:51.623391Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:42:51.617345Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:42:51.62553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab9794b1ad75cdde","local-member-id":"41d396bb56c46004","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.626461Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.630371Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.626887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.34:2379"}
	{"level":"info","ts":"2024-07-29T13:42:51.630568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:52:17 up 14 min,  0 users,  load average: 0.08, 0.18, 0.17
	Linux default-k8s-diff-port-972693 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357] <==
	I0729 13:46:13.112729       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:47:53.326750       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:47:53.326886       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 13:47:54.327887       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:47:54.327937       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:47:54.327946       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:47:54.328030       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:47:54.328082       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:47:54.329078       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:48:54.328921       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:48:54.328986       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:48:54.328994       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:48:54.330148       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:48:54.330211       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:48:54.330235       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:50:54.329480       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:50:54.329731       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:50:54.329760       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:50:54.330536       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:50:54.330636       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:50:54.331775       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
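
	The repeated 503 responses for v1beta1.metrics.k8s.io above suggest the metrics-server backend never became ready. A minimal sketch for confirming this from outside the node, assuming the kubectl context matches the profile name and the standard k8s-app=metrics-server label used by the addon manifest:

	  # Is the aggregated metrics API registered and available?
	  kubectl --context default-k8s-diff-port-972693 get apiservice v1beta1.metrics.k8s.io -o wide
	  # Inspect the backing metrics-server pod and its recent logs
	  kubectl --context default-k8s-diff-port-972693 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context default-k8s-diff-port-972693 -n kube-system logs deploy/metrics-server --tail=50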
	
	
	==> kube-controller-manager [c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc] <==
	I0729 13:46:40.222265       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:47:09.739655       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:47:10.231038       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:47:39.745444       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:47:40.239565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:48:09.751111       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:48:10.249377       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:48:39.755195       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:48:40.262997       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:49:09.761195       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:49:10.271632       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:49:14.987575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="416.21µs"
	I0729 13:49:26.985639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="83.755µs"
	E0729 13:49:39.767730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:49:40.282557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:50:09.773239       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:50:10.289362       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:50:39.779594       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:50:40.299373       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:51:09.785193       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:51:10.307957       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:51:39.791991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:51:40.315688       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:52:09.797981       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:52:10.323073       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8] <==
	I0729 13:43:11.338116       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:43:11.362614       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.34"]
	I0729 13:43:11.482883       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:43:11.482932       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:43:11.482957       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:43:11.489752       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:43:11.490030       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:43:11.490072       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:43:11.494582       1 config.go:192] "Starting service config controller"
	I0729 13:43:11.494596       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:43:11.494622       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:43:11.494625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:43:11.495032       1 config.go:319] "Starting node config controller"
	I0729 13:43:11.495038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:43:11.595399       1 shared_informer.go:320] Caches are synced for node config
	I0729 13:43:11.595447       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:43:11.595478       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612] <==
	W0729 13:42:53.351024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:42:53.351036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:42:53.351107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:53.351135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:53.351183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:42:53.351212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:42:53.351269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:53.351335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:53.351395       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:42:53.351408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 13:42:53.352038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:42:53.352075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:42:54.195063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.195144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:54.471045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.471145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:54.483466       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:42:54.483578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:42:54.580224       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:42:54.580403       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:42:54.628493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.628616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:54.639439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.639551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0729 13:42:56.632357       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:49:55 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:49:55.991651    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:49:55 default-k8s-diff-port-972693 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:49:55 default-k8s-diff-port-972693 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:49:55 default-k8s-diff-port-972693 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:49:55 default-k8s-diff-port-972693 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:50:03 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:50:03.973668    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:50:17 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:50:17.970748    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:50:32 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:50:32.970599    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:50:47 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:50:47.970829    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:50:55 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:50:55.991225    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:50:55 default-k8s-diff-port-972693 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:50:55 default-k8s-diff-port-972693 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:50:55 default-k8s-diff-port-972693 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:50:55 default-k8s-diff-port-972693 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:51:02 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:51:02.971497    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:51:17 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:51:17.971835    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:51:29 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:51:29.971002    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:51:43 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:51:43.972280    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:51:55 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:51:55.991735    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:51:55 default-k8s-diff-port-972693 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:51:55 default-k8s-diff-port-972693 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:51:55 default-k8s-diff-port-972693 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:51:55 default-k8s-diff-port-972693 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:51:57 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:51:57.970407    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:52:08 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:52:08.970973    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	
	
	==> storage-provisioner [f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a] <==
	I0729 13:43:12.932344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:43:12.941923       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:43:12.941988       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:43:12.954405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:43:12.954587       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-972693_976e62ff-6aaa-417e-a6aa-e6b502bc4345!
	I0729 13:43:12.954711       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba8b4bdb-64f1-482b-bd84-282f3fe569f2", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-972693_976e62ff-6aaa-417e-a6aa-e6b502bc4345 became leader
	I0729 13:43:13.055150       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-972693_976e62ff-6aaa-417e-a6aa-e6b502bc4345!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-wwxmx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 describe pod metrics-server-569cc877fc-wwxmx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-972693 describe pod metrics-server-569cc877fc-wwxmx: exit status 1 (71.284725ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-wwxmx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-972693 describe pod metrics-server-569cc877fc-wwxmx: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
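[Editor's context, not part of the original test output] The WARNING lines that follow all record the same request failing while the apiserver at 192.168.39.227:8443 is unreachable: a pod list in the "kubernetes-dashboard" namespace filtered by the label selector k8s-app=kubernetes-dashboard. A minimal client-go sketch of that request is shown below; the namespace, selector, and apiserver address come from the log, while the kubeconfig path and the surrounding wiring are assumptions for illustration, not the actual helpers_test.go code.

	// list_dashboard_pods.go - hedged sketch of the poll the harness retries.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path for the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same query as the warnings above: pods in kubernetes-dashboard matching
		// k8s-app=kubernetes-dashboard. While the apiserver is down this returns
		// "connection refused", which is what each WARNING line records.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}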
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:46:29.057034  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:46:56.650501  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:47:00.677817  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:47:17.402435  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:47:18.313722  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:47:28.990845  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:47:30.929991  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:48:05.412010  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:48:19.697130  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:48:40.447569  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:48:46.439521  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:48:52.035443  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:49:27.880921  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 13:49:28.456575  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:50:06.009439  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:50:09.484823  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:50:37.633555  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:51:56.649740  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:52:28.991226  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:53:05.412030  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:53:46.439483  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[previous warning repeated 41 more times]
E0729 13:54:27.881283  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[previous warning repeated 37 more times]
E0729 13:55:06.010141  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[previous warning repeated 19 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (232.001653ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-924039" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
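For context, the wait that timed out above is a repeated pod list by label selector against the kubernetes-dashboard namespace. A minimal client-go sketch of that loop is shown below; it is not the test's actual helper, the kubeconfig handling and the 5s poll interval are assumptions, and only the 9m deadline and the label selector are taken from the log:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: use the default kubeconfig and its current context.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(9 * time.Minute)
		for time.Now().Before(deadline) {
			// Same query as the warnings above:
			// GET .../namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
			pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(
				context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"},
			)
			if err != nil {
				// While the apiserver is down this surfaces as the repeated
				// "connection refused" warnings in the log.
				fmt.Println("WARNING:", err)
				time.Sleep(5 * time.Second)
				continue
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard pods")
	}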
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (225.602821ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
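The status checks above use minikube's --format flag, which renders a Go text/template over the profile's status object; that is why {{.APIServer}} printed "Stopped" while {{.Host}} printed "Running". A rough sketch of how such a template selects a field (the Status struct below is a hypothetical stand-in, not minikube's actual type):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Hypothetical stand-in for the fields queried above.
	type Status struct {
		Host      string
		APIServer string
	}
	
	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"}
		// --format={{.APIServer}} evaluates a template like this one.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}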
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-924039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-924039 logs -n 25: (1.58517921s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo cat                              | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:34:10
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:10.969348  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969356  301425 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:10.969360  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969506  301425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:34:10.970007  301425 out.go:298] Setting JSON to false
	I0729 13:34:10.970908  301425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11794,"bootTime":1722248257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:34:10.970971  301425 start.go:139] virtualization: kvm guest
	I0729 13:34:10.973245  301425 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:34:10.974804  301425 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:34:10.974803  301425 notify.go:220] Checking for updates...
	I0729 13:34:10.977011  301425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:34:10.978270  301425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:34:10.979473  301425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:34:10.980743  301425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:34:10.981923  301425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:34:10.983514  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:34:10.983962  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:10.984049  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:10.998985  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 13:34:10.999407  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:10.999928  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:10.999951  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.000306  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.000497  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.002455  301425 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:34:11.003702  301425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:34:11.003997  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:11.004037  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:11.018707  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0729 13:34:11.019177  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:11.019653  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:11.019676  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.019968  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.020126  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.055819  301425 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:34:11.057085  301425 start.go:297] selected driver: kvm2
	I0729 13:34:11.057104  301425 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.057242  301425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:34:11.057967  301425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.058029  301425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:34:11.073706  301425 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:34:11.074089  301425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:34:11.074169  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:34:11.074188  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:34:11.074240  301425 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.074366  301425 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.076296  301425 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:34:09.149068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:11.077828  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:34:11.077869  301425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:34:11.077879  301425 cache.go:56] Caching tarball of preloaded images
	I0729 13:34:11.077959  301425 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:34:11.077970  301425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:34:11.078069  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:34:11.078241  301425 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:34:15.229067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:18.301058  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:24.381104  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:27.453064  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:33.533067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:36.605120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:42.685075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:45.757111  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:51.837033  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:54.909068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:00.989073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:04.061125  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:10.141082  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:13.213123  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:19.293109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:22.365061  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:28.445075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:31.517094  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:37.597080  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:40.669073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:46.749070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:49.821083  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:55.901013  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:58.973149  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:05.053098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:08.125109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:14.205093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:17.277093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:23.357105  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:26.429122  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:32.509070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:35.581107  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:41.661120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:44.733129  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:50.813085  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:53.885117  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:59.965073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:03.037079  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:09.117098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:12.189049  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:15.193505  300746 start.go:364] duration metric: took 4m36.683808785s to acquireMachinesLock for "no-preload-566777"
	I0729 13:37:15.193569  300746 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:15.193577  300746 fix.go:54] fixHost starting: 
	I0729 13:37:15.193937  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:15.193976  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:15.209623  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0729 13:37:15.210158  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:15.210625  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:37:15.210646  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:15.211001  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:15.211265  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:15.211468  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:37:15.213144  300746 fix.go:112] recreateIfNeeded on no-preload-566777: state=Stopped err=<nil>
	I0729 13:37:15.213185  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	W0729 13:37:15.213349  300746 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:15.215474  300746 out.go:177] * Restarting existing kvm2 VM for "no-preload-566777" ...
	I0729 13:37:15.190804  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:15.190850  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191224  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:37:15.191257  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191494  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:37:15.193354  300705 machine.go:97] duration metric: took 4m37.425774293s to provisionDockerMachine
	I0729 13:37:15.193407  300705 fix.go:56] duration metric: took 4m37.447841932s for fixHost
	I0729 13:37:15.193419  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 4m37.447869212s
	W0729 13:37:15.193447  300705 start.go:714] error starting host: provision: host is not running
	W0729 13:37:15.193569  300705 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 13:37:15.193581  300705 start.go:729] Will try again in 5 seconds ...
	I0729 13:37:15.216957  300746 main.go:141] libmachine: (no-preload-566777) Calling .Start
	I0729 13:37:15.217120  300746 main.go:141] libmachine: (no-preload-566777) Ensuring networks are active...
	I0729 13:37:15.217761  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network default is active
	I0729 13:37:15.218067  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network mk-no-preload-566777 is active
	I0729 13:37:15.218451  300746 main.go:141] libmachine: (no-preload-566777) Getting domain xml...
	I0729 13:37:15.219134  300746 main.go:141] libmachine: (no-preload-566777) Creating domain...
	I0729 13:37:16.412301  300746 main.go:141] libmachine: (no-preload-566777) Waiting to get IP...
	I0729 13:37:16.413162  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.413576  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.413670  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.413557  302040 retry.go:31] will retry after 233.512145ms: waiting for machine to come up
	I0729 13:37:16.649335  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.649921  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.649945  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.649885  302040 retry.go:31] will retry after 328.846738ms: waiting for machine to come up
	I0729 13:37:16.980566  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.980976  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.981022  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.980926  302040 retry.go:31] will retry after 329.69915ms: waiting for machine to come up
	I0729 13:37:17.312547  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.312948  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.312977  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.312906  302040 retry.go:31] will retry after 418.810733ms: waiting for machine to come up
	I0729 13:37:17.733615  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.734042  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.734065  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.734009  302040 retry.go:31] will retry after 694.191211ms: waiting for machine to come up
	I0729 13:37:20.196079  300705 start.go:360] acquireMachinesLock for embed-certs-135920: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:18.429670  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:18.430024  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:18.430055  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:18.429973  302040 retry.go:31] will retry after 857.66396ms: waiting for machine to come up
	I0729 13:37:19.289078  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:19.289491  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:19.289521  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:19.289458  302040 retry.go:31] will retry after 994.340261ms: waiting for machine to come up
	I0729 13:37:20.285875  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:20.286308  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:20.286340  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:20.286263  302040 retry.go:31] will retry after 1.052380852s: waiting for machine to come up
	I0729 13:37:21.340435  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:21.340775  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:21.340821  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:21.340743  302040 retry.go:31] will retry after 1.429700498s: waiting for machine to come up
	I0729 13:37:22.772362  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:22.772754  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:22.772782  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:22.772700  302040 retry.go:31] will retry after 1.702185495s: waiting for machine to come up
	I0729 13:37:24.477636  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:24.478074  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:24.478106  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:24.478003  302040 retry.go:31] will retry after 2.649912402s: waiting for machine to come up
	I0729 13:37:27.129797  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:27.130212  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:27.130243  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:27.130159  302040 retry.go:31] will retry after 3.079887428s: waiting for machine to come up
	I0729 13:37:30.213431  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:30.213918  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:30.213958  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:30.213875  302040 retry.go:31] will retry after 3.08003223s: waiting for machine to come up
	I0729 13:37:33.297139  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.297604  300746 main.go:141] libmachine: (no-preload-566777) Found IP for machine: 192.168.61.84
	I0729 13:37:33.297627  300746 main.go:141] libmachine: (no-preload-566777) Reserving static IP address...
	I0729 13:37:33.297639  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has current primary IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.298106  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.298146  300746 main.go:141] libmachine: (no-preload-566777) Reserved static IP address: 192.168.61.84
	I0729 13:37:33.298164  300746 main.go:141] libmachine: (no-preload-566777) DBG | skip adding static IP to network mk-no-preload-566777 - found existing host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"}
	I0729 13:37:33.298178  300746 main.go:141] libmachine: (no-preload-566777) DBG | Getting to WaitForSSH function...
	I0729 13:37:33.298194  300746 main.go:141] libmachine: (no-preload-566777) Waiting for SSH to be available...
	I0729 13:37:33.300310  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300618  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.300653  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300731  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH client type: external
	I0729 13:37:33.300773  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa (-rw-------)
	I0729 13:37:33.300826  300746 main.go:141] libmachine: (no-preload-566777) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:33.300957  300746 main.go:141] libmachine: (no-preload-566777) DBG | About to run SSH command:
	I0729 13:37:33.300985  300746 main.go:141] libmachine: (no-preload-566777) DBG | exit 0
	I0729 13:37:34.861481  301044 start.go:364] duration metric: took 4m23.064160625s to acquireMachinesLock for "default-k8s-diff-port-972693"
	I0729 13:37:34.861564  301044 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:34.861576  301044 fix.go:54] fixHost starting: 
	I0729 13:37:34.862021  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:34.862055  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:34.879106  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0729 13:37:34.879506  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:34.880050  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:37:34.880077  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:34.880423  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:34.880637  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:34.880838  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:37:34.882251  301044 fix.go:112] recreateIfNeeded on default-k8s-diff-port-972693: state=Stopped err=<nil>
	I0729 13:37:34.882284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	W0729 13:37:34.882465  301044 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:34.884611  301044 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-972693" ...
	I0729 13:37:33.420745  300746 main.go:141] libmachine: (no-preload-566777) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:33.421178  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetConfigRaw
	I0729 13:37:33.421861  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.424343  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.424680  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.424710  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.425061  300746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/config.json ...
	I0729 13:37:33.425244  300746 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:33.425262  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:33.425513  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.427708  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.427961  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.427989  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.428171  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.428354  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428528  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428672  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.428933  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.429139  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.429150  300746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:33.525027  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:33.525065  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525306  300746 buildroot.go:166] provisioning hostname "no-preload-566777"
	I0729 13:37:33.525340  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525551  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.528124  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528491  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.528529  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528677  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.528865  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529025  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529144  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.529286  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.529453  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.529465  300746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-566777 && echo "no-preload-566777" | sudo tee /etc/hostname
	I0729 13:37:33.638867  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-566777
	
	I0729 13:37:33.638902  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.641406  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641730  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.641762  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641908  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.642112  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642285  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642414  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.642555  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.642727  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.642743  300746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-566777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-566777/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-566777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:33.749760  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:33.749789  300746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:33.749812  300746 buildroot.go:174] setting up certificates
	I0729 13:37:33.749821  300746 provision.go:84] configureAuth start
	I0729 13:37:33.749831  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.750114  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.752924  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753241  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.753264  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753477  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.755385  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755681  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.755701  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755840  300746 provision.go:143] copyHostCerts
	I0729 13:37:33.755904  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:33.755926  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:33.756019  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:33.756156  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:33.756169  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:33.756206  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:33.756276  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:33.756286  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:33.756317  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:33.756380  300746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.no-preload-566777 san=[127.0.0.1 192.168.61.84 localhost minikube no-preload-566777]
	I0729 13:37:34.226953  300746 provision.go:177] copyRemoteCerts
	I0729 13:37:34.227033  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:34.227066  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.229542  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229816  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.229853  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.230177  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.230314  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.230452  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.310803  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:37:34.334545  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:37:34.357908  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:34.381163  300746 provision.go:87] duration metric: took 631.325967ms to configureAuth
	I0729 13:37:34.381200  300746 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:34.381441  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:37:34.381535  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.383985  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384286  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.384312  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384473  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.384681  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384862  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384995  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.385176  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.385393  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.385414  300746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:34.640587  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:34.640615  300746 machine.go:97] duration metric: took 1.215357318s to provisionDockerMachine
	I0729 13:37:34.640628  300746 start.go:293] postStartSetup for "no-preload-566777" (driver="kvm2")
	I0729 13:37:34.640645  300746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:34.640683  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.641067  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:34.641104  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.643711  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644066  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.644097  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644215  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.644398  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.644555  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.644677  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.723215  300746 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:34.727393  300746 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:34.727425  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:34.727507  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:34.727614  300746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:34.727770  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:34.736666  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:34.759678  300746 start.go:296] duration metric: took 119.034973ms for postStartSetup
	I0729 13:37:34.759716  300746 fix.go:56] duration metric: took 19.566140877s for fixHost
	I0729 13:37:34.759748  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.762103  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762468  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.762491  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762645  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.762843  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763008  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763111  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.763229  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.763392  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.763403  300746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:34.861306  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260254.835831305
	
	I0729 13:37:34.861333  300746 fix.go:216] guest clock: 1722260254.835831305
	I0729 13:37:34.861341  300746 fix.go:229] Guest: 2024-07-29 13:37:34.835831305 +0000 UTC Remote: 2024-07-29 13:37:34.759720831 +0000 UTC m=+296.387252495 (delta=76.110474ms)
	I0729 13:37:34.861376  300746 fix.go:200] guest clock delta is within tolerance: 76.110474ms
	I0729 13:37:34.861384  300746 start.go:83] releasing machines lock for "no-preload-566777", held for 19.66783585s
	I0729 13:37:34.861413  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.861708  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:34.864181  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864534  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.864567  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864757  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865296  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865467  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865546  300746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:34.865600  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.865726  300746 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:34.865753  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.868333  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868522  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868772  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868810  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868839  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868859  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868913  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869060  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869152  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869209  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869300  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869349  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869417  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.869551  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.970978  300746 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:34.978226  300746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:35.128653  300746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:35.134619  300746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:35.134688  300746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:35.150674  300746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
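The find/-exec above simply renames any bridge or podman CNI configs so CRI-O stops loading them; per the "disabled [...] bridge cni config(s)" line, exactly one file matched. A hedged check of the result (the file name comes from the log, the listing itself is illustrative):

	ls /etc/cni/net.d/
	# 87-podman-bridge.conflist.mk_disabled   <- renamed, so it is no longer picked up as a CNI config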
	I0729 13:37:35.150697  300746 start.go:495] detecting cgroup driver to use...
	I0729 13:37:35.150762  300746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:35.166545  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:35.178859  300746 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:35.178913  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:35.197133  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:35.214430  300746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:35.337707  300746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:35.467057  300746 docker.go:233] disabling docker service ...
	I0729 13:37:35.467134  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:35.480960  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:35.493850  300746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:35.629455  300746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:35.741534  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:35.754886  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:35.773243  300746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:37:35.773323  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.783589  300746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:35.783673  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.794150  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.805389  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.816636  300746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:35.828027  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.838467  300746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.856470  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
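Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the edited keys (only these values are known from the log; anything else in the drop-in is untouched):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",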
	I0729 13:37:35.866773  300746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:35.876110  300746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:35.876175  300746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:35.889768  300746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
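The sysctl probe failed only because br_netfilter was not loaded yet; after the modprobe and the ip_forward write, the usual sanity checks would be (a sketch, not part of this log):

	lsmod | grep br_netfilter                    # module now present
	sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above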
	I0729 13:37:35.909971  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:36.046023  300746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:36.192169  300746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:36.192238  300746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:36.197281  300746 start.go:563] Will wait 60s for crictl version
	I0729 13:37:36.197365  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.201359  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:36.248317  300746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
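That version probe works without a --runtime-endpoint flag because the one-line /etc/crictl.yaml written above points crictl at the CRI-O socket. A hand check would be:

	sudo crictl version       # same probe as above
	sudo crictl info | head   # confirms the unix:///var/run/crio/crio.sock endpoint answers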
	I0729 13:37:36.248420  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.276247  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.306549  300746 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 13:37:34.885944  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Start
	I0729 13:37:34.886114  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring networks are active...
	I0729 13:37:34.886856  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network default is active
	I0729 13:37:34.887211  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network mk-default-k8s-diff-port-972693 is active
	I0729 13:37:34.887684  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Getting domain xml...
	I0729 13:37:34.888427  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Creating domain...
	I0729 13:37:36.147265  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting to get IP...
	I0729 13:37:36.148095  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148547  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148616  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.148516  302181 retry.go:31] will retry after 191.117257ms: waiting for machine to come up
	I0729 13:37:36.340984  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341507  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.341444  302181 retry.go:31] will retry after 285.557329ms: waiting for machine to come up
	I0729 13:37:36.629066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629670  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.629621  302181 retry.go:31] will retry after 397.294163ms: waiting for machine to come up
	I0729 13:37:36.307930  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:36.311057  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311389  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:36.311417  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311699  300746 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:36.316257  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:36.330109  300746 kubeadm.go:883] updating cluster {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:36.330268  300746 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:37:36.330320  300746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:36.367218  300746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 13:37:36.367250  300746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:37:36.367327  300746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.367333  300746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.367394  300746 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 13:37:36.367404  300746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.367432  300746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.367353  300746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.367412  300746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.367743  300746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.369020  300746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.369125  300746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.369150  300746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.369203  300746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.369015  300746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.369484  300746 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 13:37:36.369609  300746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.369763  300746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.560256  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.600945  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.604476  300746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 13:37:36.604539  300746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.604592  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.606566  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 13:37:36.649109  300746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 13:37:36.649210  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.649212  300746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.649328  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.696863  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.698623  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.713816  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.727059  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.764110  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.764204  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.764208  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.784479  300746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 13:37:36.784542  300746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.784558  300746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 13:37:36.784597  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.784598  300746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.784694  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.813445  300746 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 13:37:36.813491  300746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.813544  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.825275  300746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 13:37:36.825290  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 13:37:36.825392  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825463  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825327  300746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.825515  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.852786  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.852866  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:36.852822  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.852843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.852984  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
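Each image follows the same per-image flow visible above: inspect it with podman, remove the stale tag with crictl when the expected hash is not there, stat the cached tarball under /var/lib/minikube/images, then podman load it. A condensed sketch of that loop for one image (minikube additionally compares the image ID against the expected hash; this sketch only covers the present/absent case):

	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.31.0-beta.0 \
	  || { sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0; \
	       sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0; }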
	I0729 13:37:37.587824  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:37.028009  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028349  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028378  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.028295  302181 retry.go:31] will retry after 507.597159ms: waiting for machine to come up
	I0729 13:37:37.538138  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538550  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538581  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.538507  302181 retry.go:31] will retry after 508.855087ms: waiting for machine to come up
	I0729 13:37:38.049628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050241  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050277  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.050198  302181 retry.go:31] will retry after 889.089993ms: waiting for machine to come up
	I0729 13:37:38.940541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941096  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.941009  302181 retry.go:31] will retry after 891.889885ms: waiting for machine to come up
	I0729 13:37:39.834956  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835395  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:39.835341  302181 retry.go:31] will retry after 1.030799215s: waiting for machine to come up
	I0729 13:37:40.867814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868336  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868367  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:40.868283  302181 retry.go:31] will retry after 1.40369357s: waiting for machine to come up
	I0729 13:37:38.870850  300746 ssh_runner.go:235] Completed: which crictl: (2.045307778s)
	I0729 13:37:38.870925  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:38.870921  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.045429354s)
	I0729 13:37:38.870946  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 13:37:38.871001  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0: (2.018116939s)
	I0729 13:37:38.871024  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.01808875s)
	I0729 13:37:38.871054  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871083  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.018080011s)
	I0729 13:37:38.871109  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 13:37:38.871120  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871056  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 13:37:38.871166  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871151  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871234  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0: (2.018278547s)
	I0729 13:37:38.871247  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:38.871259  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 13:37:38.871304  300746 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.283446632s)
	I0729 13:37:38.871343  300746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 13:37:38.871372  300746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:38.871406  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:38.871310  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:38.939395  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:38.939419  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 13:37:38.939532  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:40.939632  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.068434649s)
	I0729 13:37:40.939669  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 13:37:40.939693  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939702  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.068259157s)
	I0729 13:37:40.939734  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 13:37:40.939761  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939794  300746 ssh_runner.go:235] Completed: which crictl: (2.068372626s)
	I0729 13:37:40.939827  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.068564103s)
	I0729 13:37:40.939843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:40.939844  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.000295325s)
	I0729 13:37:40.939847  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 13:37:40.939856  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 13:37:40.999406  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 13:37:40.999505  300746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:43.015187  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.075399061s)
	I0729 13:37:43.015226  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 13:37:43.015243  300746 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.015694914s)
	I0729 13:37:43.015259  300746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:43.015279  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 13:37:43.015313  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:42.273822  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274326  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:42.274251  302181 retry.go:31] will retry after 2.255017939s: waiting for machine to come up
	I0729 13:37:44.531432  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531845  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531873  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:44.531801  302181 retry.go:31] will retry after 2.272405743s: waiting for machine to come up
	I0729 13:37:46.401061  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.385713069s)
	I0729 13:37:46.401109  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 13:37:46.401147  300746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:46.401207  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:48.358628  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.9573934s)
	I0729 13:37:48.358659  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 13:37:48.358682  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:48.358733  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:46.806043  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806654  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806681  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:46.806599  302181 retry.go:31] will retry after 2.212726673s: waiting for machine to come up
	I0729 13:37:49.022244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022732  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022770  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:49.022677  302181 retry.go:31] will retry after 3.071460325s: waiting for machine to come up
	I0729 13:37:50.216727  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.857925776s)
	I0729 13:37:50.216769  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 13:37:50.216822  300746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.216879  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.862685  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 13:37:50.862738  300746 cache_images.go:123] Successfully loaded all cached images
	I0729 13:37:50.862746  300746 cache_images.go:92] duration metric: took 14.49548231s to LoadCachedImages
	I0729 13:37:50.862763  300746 kubeadm.go:934] updating node { 192.168.61.84 8443 v1.31.0-beta.0 crio true true} ...
	I0729 13:37:50.862924  300746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-566777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
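The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in; further down in this log it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (with the unit itself at /lib/systemd/system/kubelet.service) and picked up via daemon-reload. Verifying it by hand would look like:

	sudo systemctl cat kubelet      # shows kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload
	sudo systemctl start kubelet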
	I0729 13:37:50.863021  300746 ssh_runner.go:195] Run: crio config
	I0729 13:37:50.911526  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:50.911551  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:50.911563  300746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:50.911593  300746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.84 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-566777 NodeName:no-preload-566777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:50.911782  300746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-566777"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
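This rendered kubeadm/kubelet/kube-proxy config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; whether the node needs a full reconfigure is then decided by diffing it against the previous file, exactly as the restart path does later in this log:

	# sketch of the check kubeadm.go performs on restart
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "running cluster does not require reconfiguration"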
	
	I0729 13:37:50.911856  300746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 13:37:50.922091  300746 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:50.922162  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:50.931275  300746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 13:37:50.947494  300746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 13:37:50.963108  300746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0729 13:37:50.979666  300746 ssh_runner.go:195] Run: grep 192.168.61.84	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:50.983215  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:50.994627  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:51.117275  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:51.134412  300746 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777 for IP: 192.168.61.84
	I0729 13:37:51.134439  300746 certs.go:194] generating shared ca certs ...
	I0729 13:37:51.134461  300746 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:51.134641  300746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:51.134692  300746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:51.134703  300746 certs.go:256] generating profile certs ...
	I0729 13:37:51.134825  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/client.key
	I0729 13:37:51.134901  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key.445c667e
	I0729 13:37:51.134962  300746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key
	I0729 13:37:51.135114  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:51.135153  300746 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:51.135166  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:51.135196  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:51.135225  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:51.135256  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:51.135309  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:51.136036  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:51.169507  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:51.201916  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:51.227860  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:51.263617  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:37:51.288105  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:37:51.314837  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:51.343892  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:37:51.367328  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:51.389470  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:51.411446  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:51.433270  300746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:51.448939  300746 ssh_runner.go:195] Run: openssl version
	I0729 13:37:51.454475  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:51.465080  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469541  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469605  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.475366  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:51.485979  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:51.496382  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500511  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500571  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.505997  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:51.516733  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:51.527637  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531754  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531797  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.537237  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
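The odd-looking link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: "openssl x509 -hash" prints the hash, and a symlink named <hash>.0 in /etc/ssl/certs is what lets OpenSSL's CA lookup find the certificate. Reproducing one of them by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0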
	I0729 13:37:51.548006  300746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:51.552581  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:51.558414  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:51.563879  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:51.569869  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:51.575800  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:37:51.581525  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
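The six openssl calls above are 24-hour expiry checks: -checkend 86400 exits 0 if the certificate is still valid 86400 seconds from now and non-zero if it would expire inside that window, which is what flags a cert for regeneration. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expiring within 24h"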
	I0729 13:37:51.587642  300746 kubeadm.go:392] StartCluster: {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:51.587777  300746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:37:51.587828  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.627118  300746 cri.go:89] found id: ""
	I0729 13:37:51.627212  300746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:37:51.637686  300746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:37:51.637711  300746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:37:51.637765  300746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:37:51.647368  300746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:37:51.648291  300746 kubeconfig.go:125] found "no-preload-566777" server: "https://192.168.61.84:8443"
	I0729 13:37:51.650296  300746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:37:51.659616  300746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.84
	I0729 13:37:51.659649  300746 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:37:51.659663  300746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:37:51.659714  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.700636  300746 cri.go:89] found id: ""
	I0729 13:37:51.700703  300746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:37:51.718225  300746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:37:51.728237  300746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:37:51.728257  300746 kubeadm.go:157] found existing configuration files:
	
	I0729 13:37:51.728303  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:37:51.738280  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:37:51.738364  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:37:51.748770  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:37:51.758572  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:37:51.758649  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:37:51.769634  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.779757  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:37:51.779827  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.790745  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:37:51.801212  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:37:51.801275  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:37:51.811706  300746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:37:51.821251  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:51.933905  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
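
The sequence logged above for no-preload-566777 (stop the kubelet, drop any /etc/kubernetes/*.conf that no longer references the expected control-plane endpoint, promote the rendered kubeadm.yaml, then re-run the certs and kubeconfig init phases) can be reproduced by hand with roughly the following sketch. The paths, endpoint and kubeadm version are taken from the log; everything else is illustrative and is not minikube's actual implementation.

    #!/usr/bin/env bash
    # Sketch only: manual equivalent of the restart steps logged above for
    # no-preload-566777; not minikube's actual code path.
    set -euo pipefail
    ENDPOINT="https://control-plane.minikube.internal:8443"
    KUBEADM_PATH="/var/lib/minikube/binaries/v1.31.0-beta.0"

    # Stop the kubelet so stale static pods are not restarted mid-reconfiguration.
    sudo systemctl stop kubelet

    # Remove kubeconfigs that do not reference the expected control-plane endpoint.
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      sudo grep -q "$ENDPOINT" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done

    # Promote the freshly rendered kubeadm config and regenerate certs/kubeconfigs.
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="${KUBEADM_PATH}:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="${KUBEADM_PATH}:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
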
	I0729 13:37:53.401823  301425 start.go:364] duration metric: took 3m42.323534375s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:37:53.401902  301425 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:53.401914  301425 fix.go:54] fixHost starting: 
	I0729 13:37:53.402310  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:53.402344  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:53.421973  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 13:37:53.422456  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:53.423079  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:37:53.423112  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:53.423508  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:53.423734  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:37:53.423883  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:37:53.425687  301425 fix.go:112] recreateIfNeeded on old-k8s-version-924039: state=Stopped err=<nil>
	I0729 13:37:53.425733  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	W0729 13:37:53.425902  301425 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:53.427931  301425 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	I0729 13:37:52.097443  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.097870  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Found IP for machine: 192.168.50.34
	I0729 13:37:52.097904  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserving static IP address...
	I0729 13:37:52.097923  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has current primary IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.098329  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.098357  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserved static IP address: 192.168.50.34
	I0729 13:37:52.098377  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | skip adding static IP to network mk-default-k8s-diff-port-972693 - found existing host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"}
	I0729 13:37:52.098406  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for SSH to be available...
	I0729 13:37:52.098423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Getting to WaitForSSH function...
	I0729 13:37:52.100530  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.100878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.100908  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.101029  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH client type: external
	I0729 13:37:52.101062  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa (-rw-------)
	I0729 13:37:52.101106  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:52.101121  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | About to run SSH command:
	I0729 13:37:52.101145  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | exit 0
	I0729 13:37:52.225041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:52.225381  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetConfigRaw
	I0729 13:37:52.226001  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.228722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229109  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.229140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229315  301044 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/config.json ...
	I0729 13:37:52.229522  301044 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:52.229541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:52.229716  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.231823  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.232181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.232446  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232613  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232758  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.232913  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.233100  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.233111  301044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:52.336948  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:52.336978  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337288  301044 buildroot.go:166] provisioning hostname "default-k8s-diff-port-972693"
	I0729 13:37:52.337321  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337552  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.340284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340598  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.340623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340724  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.340913  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341090  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341261  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.341419  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.341591  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.341603  301044 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-972693 && echo "default-k8s-diff-port-972693" | sudo tee /etc/hostname
	I0729 13:37:52.455264  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-972693
	
	I0729 13:37:52.455294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.457937  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458304  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.458332  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458465  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.458667  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458857  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458995  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.459170  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.459352  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.459376  301044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-972693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-972693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-972693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:52.570543  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:52.570578  301044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:52.570603  301044 buildroot.go:174] setting up certificates
	I0729 13:37:52.570617  301044 provision.go:84] configureAuth start
	I0729 13:37:52.570628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.570900  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.573309  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573609  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.573641  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573751  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.575826  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.576177  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576344  301044 provision.go:143] copyHostCerts
	I0729 13:37:52.576414  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:52.576483  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:52.576568  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:52.576698  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:52.576707  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:52.576728  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:52.576786  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:52.576815  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:52.576845  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:52.576902  301044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-972693 san=[127.0.0.1 192.168.50.34 default-k8s-diff-port-972693 localhost minikube]
	I0729 13:37:52.764928  301044 provision.go:177] copyRemoteCerts
	I0729 13:37:52.764988  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:52.765018  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.767540  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.767842  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.767872  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.768041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.768213  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.768362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.768474  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:52.847615  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:52.877666  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 13:37:52.901219  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:52.924922  301044 provision.go:87] duration metric: took 354.279838ms to configureAuth
	I0729 13:37:52.924953  301044 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:52.925157  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:52.925244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.927791  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.928181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.928533  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928830  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.928978  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.929208  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.929230  301044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:53.176359  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:53.176391  301044 machine.go:97] duration metric: took 946.853063ms to provisionDockerMachine
	I0729 13:37:53.176404  301044 start.go:293] postStartSetup for "default-k8s-diff-port-972693" (driver="kvm2")
	I0729 13:37:53.176419  301044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:53.176441  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.176782  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:53.176818  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.179340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.179698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179858  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.180053  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.180214  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.180336  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.259826  301044 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:53.264059  301044 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:53.264087  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:53.264155  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:53.264239  301044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:53.264345  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:53.273954  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:53.297340  301044 start.go:296] duration metric: took 120.913486ms for postStartSetup
	I0729 13:37:53.297392  301044 fix.go:56] duration metric: took 18.435815853s for fixHost
	I0729 13:37:53.297421  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.299859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300187  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.300218  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.300576  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300755  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300932  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.301116  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:53.301314  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:53.301324  301044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:53.401628  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260273.369344581
	
	I0729 13:37:53.401671  301044 fix.go:216] guest clock: 1722260273.369344581
	I0729 13:37:53.401682  301044 fix.go:229] Guest: 2024-07-29 13:37:53.369344581 +0000 UTC Remote: 2024-07-29 13:37:53.297397345 +0000 UTC m=+281.644280810 (delta=71.947236ms)
	I0729 13:37:53.401705  301044 fix.go:200] guest clock delta is within tolerance: 71.947236ms
	I0729 13:37:53.401711  301044 start.go:83] releasing machines lock for "default-k8s-diff-port-972693", held for 18.540175489s
	I0729 13:37:53.401760  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.402061  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:53.404813  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405182  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.405207  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405359  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.405844  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406153  301044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:53.406210  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.406289  301044 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:53.406315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.409060  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409351  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409460  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.409814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.409878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409909  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409992  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410092  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.410183  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.410315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.410435  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410631  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.510289  301044 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:53.517635  301044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:53.660575  301044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:53.668128  301044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:53.668207  301044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:53.690732  301044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:53.690764  301044 start.go:495] detecting cgroup driver to use...
	I0729 13:37:53.690838  301044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:53.707461  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:53.721922  301044 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:53.722004  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:53.740941  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:53.759323  301044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:53.900344  301044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:54.065647  301044 docker.go:233] disabling docker service ...
	I0729 13:37:54.065780  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:54.082468  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:54.098283  301044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:54.213104  301044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:54.339560  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:54.360412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:54.384836  301044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:37:54.384900  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.400889  301044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:54.400980  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.416941  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.433090  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.449306  301044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:54.461742  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.477135  301044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.501431  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
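
The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. A rough manual check of the resulting values is sketched below; the file path and keys are taken from the log, while the grep itself is only illustrative.

    # Sketch only: confirm the values the sed edits above are meant to leave in
    # the cri-o drop-in (file path and keys taken from the log).
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    sudo systemctl restart crio && sudo crictl version
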
	I0729 13:37:54.519646  301044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:54.532995  301044 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:54.533074  301044 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:54.550639  301044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:54.561896  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:54.710789  301044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:54.885480  301044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:54.885558  301044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:54.890556  301044 start.go:563] Will wait 60s for crictl version
	I0729 13:37:54.890629  301044 ssh_runner.go:195] Run: which crictl
	I0729 13:37:54.894644  301044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:54.941141  301044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:54.941236  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:54.983380  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:55.027770  301044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:37:53.429298  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .Start
	I0729 13:37:53.429471  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:37:53.430263  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:37:53.430649  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:37:53.431011  301425 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:37:53.431825  301425 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:37:54.749878  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:37:54.751148  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.751716  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.751784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.751696  302377 retry.go:31] will retry after 230.330776ms: waiting for machine to come up
	I0729 13:37:54.984551  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.985138  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.985183  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.985094  302377 retry.go:31] will retry after 291.000555ms: waiting for machine to come up
	I0729 13:37:55.277730  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.278199  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.278220  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.278152  302377 retry.go:31] will retry after 360.474919ms: waiting for machine to come up
	I0729 13:37:55.640675  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.641255  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.641288  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.641207  302377 retry.go:31] will retry after 480.424143ms: waiting for machine to come up
	I0729 13:37:55.029239  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:55.032722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033225  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:55.033257  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033668  301044 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:55.038429  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:55.056198  301044 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:55.056373  301044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:55.056440  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:55.100534  301044 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:37:55.100612  301044 ssh_runner.go:195] Run: which lz4
	I0729 13:37:55.105708  301044 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:37:55.110384  301044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:37:55.110417  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:37:56.630726  301044 crio.go:462] duration metric: took 1.525047583s to copy over tarball
	I0729 13:37:56.630816  301044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:37:53.446825  300746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.51288234s)
	I0729 13:37:53.446866  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.663105  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.740482  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.823641  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:37:53.823753  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.324001  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.824299  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.933931  300746 api_server.go:72] duration metric: took 1.11028623s to wait for apiserver process to appear ...
	I0729 13:37:54.933969  300746 api_server.go:88] waiting for apiserver healthz status ...
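
The remaining init phases for no-preload-566777 and the process check that follows can likewise be run by hand. As with the sketch after the certs and kubeconfig phases above, this is only illustrative; the binary path and config path are taken from the log.

    # Sketch only: the remaining init phases and the apiserver process check the
    # log runs next for no-preload-566777.
    KUBEADM_PATH="/var/lib/minikube/binaries/v1.31.0-beta.0"
    sudo env PATH="${KUBEADM_PATH}:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="${KUBEADM_PATH}:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="${KUBEADM_PATH}:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
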
	I0729 13:37:54.933996  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:54.934563  300746 api_server.go:269] stopped: https://192.168.61.84:8443/healthz: Get "https://192.168.61.84:8443/healthz": dial tcp 192.168.61.84:8443: connect: connection refused
	I0729 13:37:55.434598  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.005676  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.005719  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.005737  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.066371  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.066408  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.434268  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.439205  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.439240  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:58.934796  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.944368  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.944399  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.434576  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.443061  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:59.443098  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.934805  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.943892  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:37:59.955156  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:37:59.955185  300746 api_server.go:131] duration metric: took 5.021207326s to wait for apiserver health ...
	I0729 13:37:59.955197  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.955205  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:00.307264  300746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
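	The probe loop above keeps requesting https://192.168.61.84:8443/healthz until the apiserver answers 200; the earlier 403/500 responses simply reflect RBAC bootstrap roles and priority classes still being installed. For readers who want to reproduce that behaviour outside minikube, here is a minimal, illustrative Go sketch of the same poll. It is not minikube's api_server.go; the URL, deadline, and retry interval are assumptions taken from this log.

	// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200.
	// Illustrative sketch only; values are taken from the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// During bootstrap the apiserver serves a self-signed cert, so an
			// anonymous health probe typically skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.84:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				// 403 before RBAC bootstrap, 500 while poststart hooks finish.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver health")
	}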
	I0729 13:37:56.123854  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.124460  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.124487  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.124433  302377 retry.go:31] will retry after 529.614291ms: waiting for machine to come up
	I0729 13:37:56.656136  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.656626  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.656657  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.656599  302377 retry.go:31] will retry after 794.429248ms: waiting for machine to come up
	I0729 13:37:57.452523  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:57.453001  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:57.453033  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:57.452952  302377 retry.go:31] will retry after 1.140583184s: waiting for machine to come up
	I0729 13:37:58.594636  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:58.595067  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:58.595109  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:58.595024  302377 retry.go:31] will retry after 894.563974ms: waiting for machine to come up
	I0729 13:37:59.491447  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:59.492094  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:59.492120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:59.491993  302377 retry.go:31] will retry after 1.145531829s: waiting for machine to come up
	I0729 13:38:00.639387  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:00.639807  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:00.639838  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:00.639754  302377 retry.go:31] will retry after 1.949675091s: waiting for machine to come up
	I0729 13:37:58.983188  301044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.352336314s)
	I0729 13:37:58.983233  301044 crio.go:469] duration metric: took 2.352468802s to extract the tarball
	I0729 13:37:58.983245  301044 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:37:59.022539  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:59.086881  301044 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:37:59.086913  301044 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:37:59.086924  301044 kubeadm.go:934] updating node { 192.168.50.34 8444 v1.30.3 crio true true} ...
	I0729 13:37:59.087062  301044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-972693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:37:59.087158  301044 ssh_runner.go:195] Run: crio config
	I0729 13:37:59.144128  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.144163  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:59.144182  301044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:59.144209  301044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-972693 NodeName:default-k8s-diff-port-972693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:59.144376  301044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-972693"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:59.144452  301044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:37:59.154648  301044 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:59.154717  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:59.164572  301044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0729 13:37:59.182967  301044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:37:59.202507  301044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0729 13:37:59.221603  301044 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:59.226646  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:59.244199  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:59.390312  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:59.411152  301044 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693 for IP: 192.168.50.34
	I0729 13:37:59.411178  301044 certs.go:194] generating shared ca certs ...
	I0729 13:37:59.411213  301044 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:59.411421  301044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:59.411481  301044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:59.411495  301044 certs.go:256] generating profile certs ...
	I0729 13:37:59.411614  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/client.key
	I0729 13:37:59.411709  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key.0cff1f82
	I0729 13:37:59.411780  301044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key
	I0729 13:37:59.411977  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:59.412036  301044 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:59.412052  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:59.412090  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:59.412124  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:59.412156  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:59.412221  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:59.413262  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:59.450186  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:59.496339  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:59.535462  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:59.569433  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:37:59.602826  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:37:59.639581  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:59.672966  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:37:59.707007  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:59.741894  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:59.771364  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:59.802928  301044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:59.828730  301044 ssh_runner.go:195] Run: openssl version
	I0729 13:37:59.837356  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:59.855071  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861707  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861781  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.870815  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:59.884842  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:59.899473  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904238  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904312  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.910221  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:59.923542  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:59.936729  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943440  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943496  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.951099  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:59.964578  301044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:59.969476  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:59.975715  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:59.981719  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:59.987788  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:59.993753  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:00.000228  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
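	The openssl x509 ... -checkend 86400 calls above verify that each control-plane certificate stays valid for at least the next 24 hours. The same check can be expressed directly in Go; the sketch below is illustrative only (the certificate path is just the example from this log) and is not part of minikube.

	// certcheck.go: equivalent of `openssl x509 -noout -in CERT -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Fail if the certificate expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
		} else {
			fmt.Println("certificate valid for at least 24h")
		}
	}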
	I0729 13:38:00.007898  301044 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:00.008033  301044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:00.008091  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.054999  301044 cri.go:89] found id: ""
	I0729 13:38:00.055097  301044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:00.069066  301044 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:00.069090  301044 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:00.069148  301044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:00.083486  301044 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:00.084538  301044 kubeconfig.go:125] found "default-k8s-diff-port-972693" server: "https://192.168.50.34:8444"
	I0729 13:38:00.086623  301044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:00.099514  301044 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.34
	I0729 13:38:00.099555  301044 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:00.099570  301044 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:00.099644  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.137643  301044 cri.go:89] found id: ""
	I0729 13:38:00.137726  301044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:00.157036  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:00.168591  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:00.168614  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:00.168664  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:38:00.178379  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:00.178449  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:00.189688  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:38:00.199323  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:00.199388  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:00.209351  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.219100  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:00.219171  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.228754  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:38:00.238453  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:00.238526  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:00.248479  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:00.258717  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.377121  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.413128  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:00.424610  300746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:00.446537  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:01.601214  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:01.601265  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:01.601278  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:01.601296  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:01.601305  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:01.601312  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:38:01.601323  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:01.601332  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:01.601346  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:01.601357  300746 system_pods.go:74] duration metric: took 1.154789909s to wait for pod list to return data ...
	I0729 13:38:01.601370  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:02.057111  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:02.057149  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:02.057182  300746 node_conditions.go:105] duration metric: took 455.806302ms to run NodePressure ...
	I0729 13:38:02.057210  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.420014  300746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426444  300746 kubeadm.go:739] kubelet initialised
	I0729 13:38:02.426467  300746 kubeadm.go:740] duration metric: took 6.420611ms waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426478  300746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:02.431168  300746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.436892  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436916  300746 pod_ready.go:81] duration metric: took 5.728016ms for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.436925  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436932  300746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.443079  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443102  300746 pod_ready.go:81] duration metric: took 6.163444ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.443110  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443115  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.447945  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447964  300746 pod_ready.go:81] duration metric: took 4.843364ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.447973  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447980  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.457004  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457027  300746 pod_ready.go:81] duration metric: took 9.037058ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.457038  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457045  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.825208  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825246  300746 pod_ready.go:81] duration metric: took 368.180356ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.825259  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825268  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.225868  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.225975  300746 pod_ready.go:81] duration metric: took 400.697293ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.225993  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.226003  300746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.627568  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627605  300746 pod_ready.go:81] duration metric: took 401.589314ms for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.627618  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627628  300746 pod_ready.go:38] duration metric: took 1.201138036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:03.627651  300746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:03.646855  300746 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:03.646893  300746 kubeadm.go:597] duration metric: took 12.009173344s to restartPrimaryControlPlane
	I0729 13:38:03.646910  300746 kubeadm.go:394] duration metric: took 12.059279913s to StartCluster
	I0729 13:38:03.646936  300746 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.647029  300746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:03.649213  300746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.649527  300746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:03.649810  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:38:03.649861  300746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:03.649931  300746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-566777"
	I0729 13:38:03.649962  300746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-566777"
	W0729 13:38:03.649974  300746 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:03.650021  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650400  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.650428  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.650493  300746 addons.go:69] Setting default-storageclass=true in profile "no-preload-566777"
	I0729 13:38:03.650533  300746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-566777"
	I0729 13:38:03.650601  300746 addons.go:69] Setting metrics-server=true in profile "no-preload-566777"
	I0729 13:38:03.650631  300746 addons.go:234] Setting addon metrics-server=true in "no-preload-566777"
	W0729 13:38:03.650642  300746 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:03.650675  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650985  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651014  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651029  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651054  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651324  300746 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:03.652887  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:03.670088  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0729 13:38:03.670283  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0729 13:38:03.670694  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.670769  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.671418  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671423  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671437  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671440  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671755  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0729 13:38:03.671900  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.671927  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.672491  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.672515  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.672711  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.673183  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.673207  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.673468  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.673480  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.673857  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.674012  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.677726  300746 addons.go:234] Setting addon default-storageclass=true in "no-preload-566777"
	W0729 13:38:03.677746  300746 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:03.677777  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.678133  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.678151  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.692817  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 13:38:03.693446  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.693919  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.693945  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.694335  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.694504  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.694718  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0729 13:38:03.695225  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.695726  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.695744  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.696028  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.696154  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.696514  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.697635  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.698597  300746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:03.699466  300746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:03.700447  300746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:03.700463  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:03.700481  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.701375  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:03.701390  300746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:03.701404  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.705199  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705225  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705844  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705866  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705893  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705911  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705946  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706143  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706313  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.706471  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.706755  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.708988  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.710193  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0729 13:38:03.710735  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.711282  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.711296  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.711684  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.712271  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.712322  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.712966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.713103  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.756710  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 13:38:03.757254  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.757760  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.757784  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.758125  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.758376  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.760315  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.760577  300746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:03.760594  300746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:03.760612  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.763679  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.764208  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.764277  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.765045  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.765227  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.765386  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.765546  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.883257  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:03.905104  300746 node_ready.go:35] waiting up to 6m0s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:03.985382  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:03.985412  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:04.014094  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:04.014119  300746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:04.016390  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:04.047695  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:04.062249  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:04.062328  300746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:04.095999  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:05.473341  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4569173s)
	I0729 13:38:05.473396  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473409  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.473421  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.425688075s)
	I0729 13:38:05.473547  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473558  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474089  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.474117  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474129  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474133  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474137  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474142  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474158  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474148  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474213  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.475707  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.475738  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.475746  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.476002  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.476095  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.476124  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.490038  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.490081  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.490420  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.490440  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562064  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46596112s)
	I0729 13:38:05.562122  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562136  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.562492  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.562516  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562532  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562541  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.564397  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.564410  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.564448  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.564471  300746 addons.go:475] Verifying addon metrics-server=true in "no-preload-566777"
	I0729 13:38:05.566888  300746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
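	The sequence above is the addon install path: each manifest is scp'd under /etc/kubernetes/addons on the guest and then applied with the bundled kubectl against the in-VM kubeconfig. Below is a minimal sketch of that apply step; it is illustrative only (not minikube's addons.go), it runs kubectl locally rather than over ssh_runner as the real flow does, and the binary and manifest paths are simply copied from the log lines above.

// Hypothetical sketch of the apply step recorded above: run the bundled
// kubectl for each staged addon manifest with KUBECONFIG pointed at the
// in-VM kubeconfig. Not minikube's actual code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddon(manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	_ = applyAddon(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
}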
	I0729 13:38:02.590640  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:02.591134  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:02.591162  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:02.591087  302377 retry.go:31] will retry after 1.765945358s: waiting for machine to come up
	I0729 13:38:04.358332  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:04.358934  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:04.358963  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:04.358899  302377 retry.go:31] will retry after 2.923224015s: waiting for machine to come up
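	The "will retry after ..." lines come from a poll-with-growing-backoff loop that runs while the freshly started VM has no DHCP lease yet. The sketch below shows only the general pattern, assuming a hypothetical lookupLeaseIP helper; it is not the actual retry.go or libvirt lease code.

// Sketch of retry-with-backoff: poll a condition, sleeping a little longer
// each attempt, until it succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP stands in for whatever reads the DHCP lease; placeholder only.
func lookupLeaseIP() (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(deadline time.Duration) (string, error) {
	backoff := time.Second
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("waiting for machine to come up, retrying in %s\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait between attempts
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	if ip, err := waitForIP(30 * time.Second); err == nil {
		fmt.Println("got IP:", ip)
	}
}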
	I0729 13:38:01.713425  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.33625836s)
	I0729 13:38:01.713462  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:01.941164  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.017707  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.134991  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:02.135105  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:02.636248  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.135563  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.264470  301044 api_server.go:72] duration metric: took 1.129485078s to wait for apiserver process to appear ...
	I0729 13:38:03.264512  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:03.264545  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.392570  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.392609  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.392626  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.423076  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.423120  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.764837  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.770393  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:06.770428  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.264879  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.269632  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:07.269670  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.764878  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.770291  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:38:07.781660  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:07.781691  301044 api_server.go:131] duration metric: took 4.517171532s to wait for apiserver health ...
	I0729 13:38:07.781700  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:38:07.781707  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:07.784769  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
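	The 403 -> 500 -> 200 progression above is the apiserver health wait: the /healthz endpoint is probed repeatedly until it reports ok. Below is a minimal sketch of such a probe loop, using the address from the log and skipping TLS verification purely for illustration; it is not minikube's api_server.go.

// Poll an apiserver /healthz endpoint until it returns 200 or we give up.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	url := "https://192.168.50.34:8444/healthz" // address taken from the log above
	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // control plane is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second probes
	}
}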
	I0729 13:38:05.568441  300746 addons.go:510] duration metric: took 1.918571396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:38:05.916109  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:07.284234  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:07.284764  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:07.284819  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:07.284694  302377 retry.go:31] will retry after 2.9786525s: waiting for machine to come up
	I0729 13:38:10.265771  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:10.266128  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:10.266161  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:10.266077  302377 retry.go:31] will retry after 5.044155966s: waiting for machine to come up
	I0729 13:38:07.786038  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:07.824838  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:07.850139  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:07.862900  301044 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:07.862932  301044 system_pods.go:61] "coredns-7db6d8ff4d-zllk5" [3ebb659a-7849-498b-a81c-54f75c8e1536] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:07.862943  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [fc5c7286-5cd4-4eeb-879e-6263f82c4164] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:07.862950  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [a3a13c0b-844d-4a5b-93a0-fb9784b4b095] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:07.862957  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4e6c469d-b2a5-4ec2-95a4-01b6ad7de347] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:07.862964  301044 system_pods.go:61] "kube-proxy-6hxkb" [42b01d8b-9a37-40d0-ac32-09e3e261f953] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:07.862979  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [2373a650-57bb-4dc3-96ab-7f6cd040c148] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:07.862985  301044 system_pods.go:61] "metrics-server-569cc877fc-dlrjb" [360087fa-273d-4ba8-a299-54678724c45e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:07.862990  301044 system_pods.go:61] "storage-provisioner" [3e3fb5ef-6761-4671-a093-8616241cd98f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:07.862996  301044 system_pods.go:74] duration metric: took 12.833023ms to wait for pod list to return data ...
	I0729 13:38:07.863007  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:07.868359  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:07.868385  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:07.868395  301044 node_conditions.go:105] duration metric: took 5.383164ms to run NodePressure ...
	I0729 13:38:07.868412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:08.166890  301044 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175546  301044 kubeadm.go:739] kubelet initialised
	I0729 13:38:08.175570  301044 kubeadm.go:740] duration metric: took 8.646638ms waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175588  301044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.186944  301044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.194446  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194479  301044 pod_ready.go:81] duration metric: took 7.500494ms for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.194487  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194495  301044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.202341  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202366  301044 pod_ready.go:81] duration metric: took 7.863125ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.202380  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202388  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.209017  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209041  301044 pod_ready.go:81] duration metric: took 6.646309ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.209051  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209057  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.256503  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256530  301044 pod_ready.go:81] duration metric: took 47.465005ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.256543  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256552  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652875  301044 pod_ready.go:92] pod "kube-proxy-6hxkb" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:08.652901  301044 pod_ready.go:81] duration metric: took 396.340654ms for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652912  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.658352  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:08.411629  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:08.908602  300746 node_ready.go:49] node "no-preload-566777" has status "Ready":"True"
	I0729 13:38:08.908629  300746 node_ready.go:38] duration metric: took 5.003487604s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:08.908639  300746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.914468  300746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.921796  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
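	node_ready.go and pod_ready.go above poll the API until the node and the system-critical pods report Ready. A rough equivalent using plain kubectl is sketched below; it is an illustrative stand-in (minikube queries the API directly), and the node name is taken from the log.

// Wait for a node's Ready condition to become "True" by shelling out to kubectl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func nodeReady(node string) bool {
	out, err := exec.Command("kubectl", "get", "node", node,
		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's 6m0s wait
	for time.Now().Before(deadline) {
		if nodeReady("no-preload-566777") {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}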
	I0729 13:38:15.313102  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313621  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313650  301425 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:38:15.313665  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:38:15.314120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.314168  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | skip adding static IP to network mk-old-k8s-version-924039 - found existing host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"}
	I0729 13:38:15.314187  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:38:15.314205  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:38:15.314219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:38:15.316468  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316779  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.316827  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316994  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:38:15.317013  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:38:15.317042  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:15.317054  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:38:15.317076  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:38:15.444818  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:15.445203  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:38:15.445858  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.448296  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.448784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.448834  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.449028  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:38:15.449208  301425 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:15.449226  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:15.449469  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.451695  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452017  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.452046  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.452420  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452606  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452770  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.452945  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.453151  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.453165  301425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:15.561558  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:15.561590  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.561859  301425 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:38:15.561887  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.562079  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.564776  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565116  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.565157  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565286  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.565495  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565669  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565805  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.565952  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.566129  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.566140  301425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:38:15.687712  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:38:15.687744  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.690289  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690614  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.690638  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690864  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.691104  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691290  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691463  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.691649  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.691841  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.691869  301425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:15.814102  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:15.814140  301425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:15.814190  301425 buildroot.go:174] setting up certificates
	I0729 13:38:15.814198  301425 provision.go:84] configureAuth start
	I0729 13:38:15.814210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.814521  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.817140  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817548  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.817583  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817728  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.819957  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820307  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.820335  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820476  301425 provision.go:143] copyHostCerts
	I0729 13:38:15.820529  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:15.820539  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:15.820592  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:15.820685  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:15.820693  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:15.820713  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:15.820772  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:15.820779  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:15.820828  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:15.820909  301425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:38:15.895797  301425 provision.go:177] copyRemoteCerts
	I0729 13:38:15.895866  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:15.895898  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.898774  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899173  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.899214  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899444  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.899672  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.899882  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.900048  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
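	copyRemoteCerts above pushes the freshly generated ca.pem, server.pem and server-key.pem to the guest over the machine's SSH key before they land in /etc/docker. A hedged sketch of that push using plain scp follows; paths mirror the log, the staging target is an assumption, and the real flow writes into /etc/docker with elevated privileges through its own SSH runner.

// Copy the provisioning certificates to the guest with scp and the machine key.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa"
	// Paths relative to the .minikube directory; staging under /tmp is an assumption.
	files := []string{"certs/ca.pem", "machines/server.pem", "machines/server-key.pem"}
	for _, f := range files {
		cmd := exec.Command("scp",
			"-i", key,
			"-o", "StrictHostKeyChecking=no",
			f, "docker@192.168.39.227:/tmp/")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("scp %s failed: %v\n%s\n", f, err, out)
		}
	}
}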
	I0729 13:38:16.606081  300705 start.go:364] duration metric: took 56.40993179s to acquireMachinesLock for "embed-certs-135920"
	I0729 13:38:16.606131  300705 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:38:16.606139  300705 fix.go:54] fixHost starting: 
	I0729 13:38:16.606611  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:16.606652  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:16.626502  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0729 13:38:16.626989  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:16.627491  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:16.627511  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:16.627897  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:16.628100  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:16.628242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:16.629856  300705 fix.go:112] recreateIfNeeded on embed-certs-135920: state=Stopped err=<nil>
	I0729 13:38:16.629879  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	W0729 13:38:16.630046  300705 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:38:16.632177  300705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-135920" ...
	I0729 13:38:12.659133  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.159457  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.159792  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.159818  301044 pod_ready.go:81] duration metric: took 7.506898395s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.159827  301044 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.633625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Start
	I0729 13:38:16.633803  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring networks are active...
	I0729 13:38:16.634580  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network default is active
	I0729 13:38:16.634947  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network mk-embed-certs-135920 is active
	I0729 13:38:16.635454  300705 main.go:141] libmachine: (embed-certs-135920) Getting domain xml...
	I0729 13:38:16.636201  300705 main.go:141] libmachine: (embed-certs-135920) Creating domain...
	I0729 13:38:15.988091  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:16.019058  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:38:16.047266  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:16.072992  301425 provision.go:87] duration metric: took 258.777499ms to configureAuth
	I0729 13:38:16.073029  301425 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:16.073250  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:38:16.073338  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.075801  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.076219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076350  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.076560  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076750  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076972  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.077169  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.077354  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.077369  301425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:16.357614  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:16.357650  301425 machine.go:97] duration metric: took 908.424232ms to provisionDockerMachine
	I0729 13:38:16.357666  301425 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:38:16.357680  301425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:16.357706  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.358060  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:16.358089  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.360841  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361257  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.361314  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361410  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.361645  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.361821  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.361987  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.448673  301425 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:16.453435  301425 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:16.453461  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:16.453543  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:16.453638  301425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:16.453763  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:16.464185  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:16.490358  301425 start.go:296] duration metric: took 132.675687ms for postStartSetup
	I0729 13:38:16.490422  301425 fix.go:56] duration metric: took 23.088507704s for fixHost
	I0729 13:38:16.490450  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.493249  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493571  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.493612  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493781  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.494046  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494241  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494388  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.494561  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.494759  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.494769  301425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:38:16.605903  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260296.583363181
	
	I0729 13:38:16.605930  301425 fix.go:216] guest clock: 1722260296.583363181
	I0729 13:38:16.605940  301425 fix.go:229] Guest: 2024-07-29 13:38:16.583363181 +0000 UTC Remote: 2024-07-29 13:38:16.490427183 +0000 UTC m=+245.556685019 (delta=92.935998ms)
	I0729 13:38:16.605967  301425 fix.go:200] guest clock delta is within tolerance: 92.935998ms
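(Reference note, not part of the captured log: the delta reported above is simply the guest clock minus the host-observed timestamp, 1722260296.583363181 - 1722260296.490427183 = 0.092935998 s, i.e. 92.935998ms, which is why it is accepted as within tolerance.)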
	I0729 13:38:16.605974  301425 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 23.204101255s
	I0729 13:38:16.606006  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.606296  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:16.609324  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609669  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.609701  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609826  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610328  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610516  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610589  301425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:16.610673  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.610758  301425 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:16.610786  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.613356  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613639  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613689  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.613712  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613910  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614092  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.614112  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.614122  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614287  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614307  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614449  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.614496  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614635  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614771  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.719174  301425 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:16.726348  301425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:16.880130  301425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:16.886410  301425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:16.886484  301425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:16.904120  301425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:16.904151  301425 start.go:495] detecting cgroup driver to use...
	I0729 13:38:16.904222  301425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:16.927036  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:16.947380  301425 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:16.947448  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:16.964612  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:16.979266  301425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:17.108950  301425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:17.263118  301425 docker.go:233] disabling docker service ...
	I0729 13:38:17.263192  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:17.282563  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:17.299473  301425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:17.448598  301425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:17.568025  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:17.583700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:17.603159  301425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:38:17.603223  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.615655  301425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:17.615728  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.628639  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.640456  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
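(Reference note, not part of the captured log: the three sed edits above rewrite the CRI-O drop-in config for the pause image, the cgroup driver and the conmon cgroup. A sketch of the affected keys in /etc/crio/crio.conf.d/02-crio.conf after the edits; the [crio.image]/[crio.runtime] section headers are the stock CRI-O layout and are assumed here, they are not shown in the log:)

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"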
	I0729 13:38:17.652160  301425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:17.663864  301425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:17.675293  301425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:17.675361  301425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:17.690427  301425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
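(Reference note, not part of the captured log: the earlier sysctl failure only means the br_netfilter module was not loaded yet; the modprobe above creates /proc/sys/net/bridge/bridge-nf-call-iptables, and writing 1 to net.ipv4.ip_forward lets the node route pod traffic. Both are standard prerequisites for a bridge-based CNI.)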
	I0729 13:38:17.702163  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:17.831401  301425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:17.985760  301425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:17.985851  301425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:17.990740  301425 start.go:563] Will wait 60s for crictl version
	I0729 13:38:17.990798  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:17.994741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:18.035793  301425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:18.035889  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.065036  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.097441  301425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:38:13.421995  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.944090  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.933596  300746 pod_ready.go:92] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.933621  300746 pod_ready.go:81] duration metric: took 8.019124005s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.933634  300746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943434  300746 pod_ready.go:92] pod "etcd-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.943465  300746 pod_ready.go:81] duration metric: took 9.816863ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943478  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952623  300746 pod_ready.go:92] pod "kube-apiserver-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.952644  300746 pod_ready.go:81] duration metric: took 9.157998ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952653  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.956989  300746 pod_ready.go:92] pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.957010  300746 pod_ready.go:81] duration metric: took 4.350015ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.957023  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962772  300746 pod_ready.go:92] pod "kube-proxy-ql6wf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.962796  300746 pod_ready.go:81] duration metric: took 5.763769ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962807  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318604  300746 pod_ready.go:92] pod "kube-scheduler-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:17.318632  300746 pod_ready.go:81] duration metric: took 355.816982ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318642  300746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:18.098840  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:18.102182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102629  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:18.102665  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102925  301425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:18.107544  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:18.122039  301425 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:18.122176  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:38:18.122249  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:18.169198  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:18.169279  301425 ssh_runner.go:195] Run: which lz4
	I0729 13:38:18.173861  301425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:38:18.178840  301425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:18.178881  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:38:19.887360  301425 crio.go:462] duration metric: took 1.713549828s to copy over tarball
	I0729 13:38:19.887450  301425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:18.167033  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:20.168009  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:17.933984  300705 main.go:141] libmachine: (embed-certs-135920) Waiting to get IP...
	I0729 13:38:17.935033  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:17.935595  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:17.935652  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:17.935560  302586 retry.go:31] will retry after 195.331915ms: waiting for machine to come up
	I0729 13:38:18.133074  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.133566  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.133592  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.133513  302586 retry.go:31] will retry after 348.993714ms: waiting for machine to come up
	I0729 13:38:18.484164  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.484746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.484771  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.484703  302586 retry.go:31] will retry after 372.899167ms: waiting for machine to come up
	I0729 13:38:18.859212  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.859721  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.859746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.859672  302586 retry.go:31] will retry after 415.38859ms: waiting for machine to come up
	I0729 13:38:19.276241  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.276785  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.276816  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.276715  302586 retry.go:31] will retry after 553.262343ms: waiting for machine to come up
	I0729 13:38:19.831475  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.831994  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.832030  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.831949  302586 retry.go:31] will retry after 579.574559ms: waiting for machine to come up
	I0729 13:38:20.412838  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:20.413273  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:20.413302  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:20.413225  302586 retry.go:31] will retry after 908.712618ms: waiting for machine to come up
	I0729 13:38:21.324197  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:21.324824  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:21.324849  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:21.324723  302586 retry.go:31] will retry after 1.4226484s: waiting for machine to come up
	I0729 13:38:19.328753  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:21.330005  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.836067  301425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.948583188s)
	I0729 13:38:22.836104  301425 crio.go:469] duration metric: took 2.948710335s to extract the tarball
	I0729 13:38:22.836114  301425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:22.878370  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:22.921339  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:22.921370  301425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:38:22.921445  301425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.921545  301425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.921547  301425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:38:22.921633  301425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:22.921475  301425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.921479  301425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923052  301425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:38:22.923712  301425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.923723  301425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923733  301425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.923743  301425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.923803  301425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.923923  301425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.923976  301425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.079335  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.095210  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.096664  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.109172  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.111720  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.114386  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.200545  301425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:38:23.200629  301425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.200698  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.203884  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:38:23.261424  301425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:38:23.261500  301425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.261528  301425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:38:23.261561  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.261569  301425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.261610  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.267971  301425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:38:23.268018  301425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.268075  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317322  301425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:38:23.317369  301425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.317387  301425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:38:23.317422  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317441  301425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.317440  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.317489  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317507  301425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:38:23.317530  301425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:38:23.317551  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.317588  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.317553  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317683  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.322770  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.432764  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:38:23.432833  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:38:23.432877  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.442661  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:38:23.442741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:38:23.442785  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:38:23.442825  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:38:23.481401  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:38:23.484727  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:38:24.057020  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:24.203622  301425 cache_images.go:92] duration metric: took 1.282232497s to LoadCachedImages
	W0729 13:38:24.203724  301425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 13:38:24.203742  301425 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:38:24.203883  301425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:24.203996  301425 ssh_runner.go:195] Run: crio config
	I0729 13:38:24.274480  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:38:24.274531  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:24.274547  301425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:24.274582  301425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:38:24.274784  301425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:24.274863  301425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:38:24.285241  301425 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:24.285333  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:24.294677  301425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:38:24.311572  301425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:24.328768  301425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:38:24.346849  301425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:24.351047  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:24.364302  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:24.502947  301425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:24.524583  301425 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:38:24.524610  301425 certs.go:194] generating shared ca certs ...
	I0729 13:38:24.524626  301425 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:24.524831  301425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:24.524889  301425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:24.524908  301425 certs.go:256] generating profile certs ...
	I0729 13:38:24.525030  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:38:24.525090  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:38:24.525143  301425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:38:24.525300  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:24.525345  301425 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:24.525359  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:24.525390  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:24.525416  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:24.525440  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:24.525495  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:24.526416  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:24.593901  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:24.641443  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:24.679927  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:24.740839  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:38:24.779899  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:38:24.814327  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:24.842166  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:38:24.868619  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:24.894053  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:24.921437  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:24.947676  301425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:24.966469  301425 ssh_runner.go:195] Run: openssl version
	I0729 13:38:24.972780  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:24.985676  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990293  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990356  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.996523  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:25.007631  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:25.018369  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022779  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022840  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.028471  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:25.039307  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:25.050190  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054731  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054799  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.060568  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
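(Reference note, not part of the captured log: the repeated pattern above is how each CA certificate is installed into the guest's OpenSSL trust store. An illustrative sketch for one of the certs, using the hash value that appears in the log:)

	# the symlink name is the OpenSSL subject hash plus ".0"
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0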
	I0729 13:38:25.071531  301425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:25.076195  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:25.082194  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:25.088573  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:25.095625  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:25.101900  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:25.107797  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
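(Reference note, not part of the captured log: "openssl x509 -checkend 86400" exits non-zero when the given certificate expires within the next 86400 seconds, i.e. 24 hours; the run of checks above is presumably how minikube verifies that the existing control-plane certificates are still usable before reusing them.)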
	I0729 13:38:25.113775  301425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:25.113903  301425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:25.113975  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.159804  301425 cri.go:89] found id: ""
	I0729 13:38:25.159887  301425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:25.172248  301425 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:25.172271  301425 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:25.172321  301425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:25.182852  301425 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:25.184249  301425 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:25.186246  301425 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-924039" cluster setting kubeconfig missing "old-k8s-version-924039" context setting]
	I0729 13:38:25.188334  301425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:25.262355  301425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:25.274019  301425 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0729 13:38:25.274063  301425 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:25.274078  301425 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:25.274141  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.311295  301425 cri.go:89] found id: ""
	I0729 13:38:25.311365  301425 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:25.330380  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:25.343607  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:25.343651  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:25.343709  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:25.356979  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:25.357048  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:25.370453  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:25.386234  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:25.386308  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:25.403905  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.413906  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:25.414011  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.431532  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:25.448250  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:25.448325  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:25.459773  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:25.469841  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:25.584845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:22.667857  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:24.668022  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.748882  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:22.749346  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:22.749368  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:22.749292  302586 retry.go:31] will retry after 1.460248931s: waiting for machine to come up
	I0729 13:38:24.212019  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:24.212538  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:24.212567  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:24.212479  302586 retry.go:31] will retry after 1.462429402s: waiting for machine to come up
	I0729 13:38:25.676972  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:25.677407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:25.677429  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:25.677368  302586 retry.go:31] will retry after 2.551129627s: waiting for machine to come up
	I0729 13:38:23.826435  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:25.826981  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.325176  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:26.367294  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.618571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.775377  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
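The five commands above replay individual "kubeadm init" phases against the regenerated /var/tmp/minikube/kubeadm.yaml instead of running a full init. Condensed into one loop (purely illustrative; the log runs each phase as a separate ssh command):

    # Condensed form of the phase sequence above (v1.20.0 binaries path as in the log).
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done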
	I0729 13:38:26.860948  301425 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:26.861038  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.361227  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.362003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.861172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.361165  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.861469  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.361306  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.861442  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
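The repeated pgrep lines above (and below) are the apiserver wait loop: after the etcd phase, minikube polls roughly every 500ms for a kube-apiserver process whose command line mentions minikube. A shell equivalent of that poll (the 120s timeout is an assumption; minikube drives this from Go):

    # Illustrative wait loop matching the pgrep polling above; the timeout value is assumed.
    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
      sleep 0.5
    done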
	I0729 13:38:27.167961  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:29.667405  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.230763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:28.231276  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:28.231299  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:28.231239  302586 retry.go:31] will retry after 2.333059097s: waiting for machine to come up
	I0729 13:38:30.566386  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:30.566786  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:30.566815  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:30.566733  302586 retry.go:31] will retry after 3.717362174s: waiting for machine to come up
	I0729 13:38:30.326143  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:32.825635  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:31.361866  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.361776  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.862004  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.361883  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.862010  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.362013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.861958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.361390  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.861465  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.165082  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.165674  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.165885  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.288242  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288935  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has current primary IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288968  300705 main.go:141] libmachine: (embed-certs-135920) Found IP for machine: 192.168.72.207
	I0729 13:38:34.288987  300705 main.go:141] libmachine: (embed-certs-135920) Reserving static IP address...
	I0729 13:38:34.289557  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.289586  300705 main.go:141] libmachine: (embed-certs-135920) Reserved static IP address: 192.168.72.207
	I0729 13:38:34.289604  300705 main.go:141] libmachine: (embed-certs-135920) DBG | skip adding static IP to network mk-embed-certs-135920 - found existing host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"}
	I0729 13:38:34.289619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Getting to WaitForSSH function...
	I0729 13:38:34.289635  300705 main.go:141] libmachine: (embed-certs-135920) Waiting for SSH to be available...
	I0729 13:38:34.291951  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292308  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.292340  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292589  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH client type: external
	I0729 13:38:34.292619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa (-rw-------)
	I0729 13:38:34.292651  300705 main.go:141] libmachine: (embed-certs-135920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:34.292665  300705 main.go:141] libmachine: (embed-certs-135920) DBG | About to run SSH command:
	I0729 13:38:34.292677  300705 main.go:141] libmachine: (embed-certs-135920) DBG | exit 0
	I0729 13:38:34.417738  300705 main.go:141] libmachine: (embed-certs-135920) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:34.418128  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetConfigRaw
	I0729 13:38:34.418881  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.421524  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.421875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.421911  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.422113  300705 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/config.json ...
	I0729 13:38:34.422306  300705 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:34.422325  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:34.422544  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.424658  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.425073  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425167  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.425365  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425575  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425786  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.425935  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.426155  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.426172  300705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:34.529324  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:34.529354  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529600  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:38:34.529625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.532564  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.532966  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.533001  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.533274  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.533502  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533701  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533906  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.534116  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.534339  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.534353  300705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-135920 && echo "embed-certs-135920" | sudo tee /etc/hostname
	I0729 13:38:34.651175  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-135920
	
	I0729 13:38:34.651203  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.653763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.654085  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654266  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.654460  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654647  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654838  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.655024  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.655230  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.655246  300705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-135920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-135920/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-135920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:34.769548  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:34.769579  300705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:34.769597  300705 buildroot.go:174] setting up certificates
	I0729 13:38:34.769605  300705 provision.go:84] configureAuth start
	I0729 13:38:34.769613  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.769910  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.772513  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.772833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.772859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.773005  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.775133  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775480  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.775506  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775607  300705 provision.go:143] copyHostCerts
	I0729 13:38:34.775671  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:34.775681  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:34.775738  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:34.775828  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:34.775836  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:34.775855  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:34.775909  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:34.775916  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:34.775932  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:34.775981  300705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.embed-certs-135920 san=[127.0.0.1 192.168.72.207 embed-certs-135920 localhost minikube]
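The provision step above mints a per-machine server certificate signed by the minikube CA, with the organization and SANs listed in the log line. Minikube does this in Go; a hypothetical openssl equivalent using the same SANs would look roughly like:

    # Hypothetical openssl flow, not minikube's code; SANs and org are taken from the log line above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-135920/CN=minikube"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.207,DNS:embed-certs-135920,DNS:localhost,DNS:minikube')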
	I0729 13:38:34.901161  300705 provision.go:177] copyRemoteCerts
	I0729 13:38:34.901230  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:34.901258  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.903730  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.904060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904245  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.904428  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.904606  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.904726  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:34.986647  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:35.010406  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:38:35.033884  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:35.057289  300705 provision.go:87] duration metric: took 287.670762ms to configureAuth
	I0729 13:38:35.057318  300705 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:35.057521  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:35.057621  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.060303  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060634  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.060667  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060840  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.061053  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061259  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061433  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.061599  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.061775  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.061792  300705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:35.344890  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:35.344923  300705 machine.go:97] duration metric: took 922.603779ms to provisionDockerMachine
	I0729 13:38:35.344936  300705 start.go:293] postStartSetup for "embed-certs-135920" (driver="kvm2")
	I0729 13:38:35.344947  300705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:35.344964  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.345304  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:35.345341  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.348029  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348420  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.348458  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348612  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.348832  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.348981  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.349112  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.431975  300705 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:35.436416  300705 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:35.436441  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:35.436522  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:35.436621  300705 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:35.436767  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:35.446166  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:35.473466  300705 start.go:296] duration metric: took 128.511199ms for postStartSetup
	I0729 13:38:35.473513  300705 fix.go:56] duration metric: took 18.867373858s for fixHost
	I0729 13:38:35.473540  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.476118  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476477  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.476504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476672  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.476877  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477093  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477241  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.477468  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.477642  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.477652  300705 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:35.577853  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260315.546644144
	
	I0729 13:38:35.577882  300705 fix.go:216] guest clock: 1722260315.546644144
	I0729 13:38:35.577892  300705 fix.go:229] Guest: 2024-07-29 13:38:35.546644144 +0000 UTC Remote: 2024-07-29 13:38:35.473518121 +0000 UTC m=+357.868969453 (delta=73.126023ms)
	I0729 13:38:35.577919  300705 fix.go:200] guest clock delta is within tolerance: 73.126023ms
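The "date +%!s(MISSING).%!N(MISSING)" above is a Go fmt placeholder artifact in the log; the command sent to the guest is evidently date +%s.%N (the reply 1722260315.546644144 is seconds.nanoseconds), and the resulting 73ms guest/host delta is then checked against a drift tolerance. A shell sketch of the same comparison, with an assumed 1-second tolerance:

    # Sketch of the guest-clock check; the 1s tolerance is an assumed value.
    guest=$(ssh docker@192.168.72.207 'date +%s.%N')
    host=$(date +%s.%N)
    delta=$(echo "$host - $guest" | bc -l)
    awk -v d="$delta" 'BEGIN { exit (d < 1 && d > -1) ? 0 : 1 }' \
      && echo "guest clock delta ${delta}s within tolerance" \
      || echo "guest clock delta ${delta}s exceeds tolerance"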
	I0729 13:38:35.577926  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 18.971820448s
	I0729 13:38:35.577950  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.578260  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:35.581109  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581474  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.581507  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581707  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582287  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582451  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582562  300705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:35.582616  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.582645  300705 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:35.582673  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.585527  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585555  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585989  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586021  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586062  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586084  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586171  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586351  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586360  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586573  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586582  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586795  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586838  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.586942  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.686359  300705 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:35.692726  300705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:35.838487  300705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:35.844313  300705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:35.844416  300705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:35.861079  300705 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
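The find invocation above carries another fmt placeholder: "%!p(MISSING), " stands for -printf "%p, ". An equivalent, fully spelled-out form of the same disable pass (the original substitutes {} directly into the sh -c string; the "$1" form below is just a safer spelling):

    # Move bridge/podman CNI configs aside so they don't conflict with the CNI minikube installs.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;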
	I0729 13:38:35.861103  300705 start.go:495] detecting cgroup driver to use...
	I0729 13:38:35.861178  300705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:35.880678  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:35.897996  300705 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:35.898070  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:35.915337  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:35.930990  300705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:36.039923  300705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:36.198255  300705 docker.go:233] disabling docker service ...
	I0729 13:38:36.198340  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:36.213373  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:36.227364  300705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:36.351279  300705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:36.468325  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:36.483692  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:36.503872  300705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:38:36.503945  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.515397  300705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:36.515502  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.527170  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.538668  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.550013  300705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:36.561402  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.573747  300705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.594158  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.606047  300705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:36.616858  300705 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:36.616961  300705 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:36.633281  300705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:36.644423  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:36.779934  300705 ssh_runner.go:195] Run: sudo systemctl restart crio
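The sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the cgroupfs cgroup manager, put conmon in the pod cgroup, and allow unprivileged binds to low ports, after which crio is restarted. A quick check that the drop-in ended up as intended (expected lines inferred from those sed expressions):

    # Confirm the values written by the sed edits above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",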
	I0729 13:38:36.924394  300705 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:36.924483  300705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:36.929889  300705 start.go:563] Will wait 60s for crictl version
	I0729 13:38:36.929935  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:38:36.933671  300705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:36.973428  300705 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:36.973506  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.002245  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.034982  300705 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:38:37.036162  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:37.039092  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:37.039533  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039697  300705 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:37.044028  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:37.057278  300705 kubeadm.go:883] updating cluster {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:37.057398  300705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:38:37.057504  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:37.096111  300705 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:38:37.096205  300705 ssh_runner.go:195] Run: which lz4
	I0729 13:38:37.100600  300705 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:37.104942  300705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:37.104974  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:38:35.325849  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:37.326770  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.362042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.862022  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.361208  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.862020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.362115  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.861360  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.362077  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.861478  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.361278  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.861920  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.167072  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:40.667067  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:38.548671  300705 crio.go:462] duration metric: took 1.448103052s to copy over tarball
	I0729 13:38:38.548764  300705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:40.801144  300705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.252337742s)
	I0729 13:38:40.801177  300705 crio.go:469] duration metric: took 2.252468783s to extract the tarball
	I0729 13:38:40.801185  300705 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:40.840132  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:40.887424  300705 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:38:40.887447  300705 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:38:40.887456  300705 kubeadm.go:934] updating node { 192.168.72.207 8443 v1.30.3 crio true true} ...
	I0729 13:38:40.887583  300705 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-135920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:40.887661  300705 ssh_runner.go:195] Run: crio config
	I0729 13:38:40.943732  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:40.943759  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:40.943771  300705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:40.943801  300705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.207 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-135920 NodeName:embed-certs-135920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:38:40.943967  300705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-135920"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:40.944048  300705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:38:40.954284  300705 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:40.954354  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:40.963877  300705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 13:38:40.981828  300705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:40.999273  300705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 13:38:41.016590  300705 ssh_runner.go:195] Run: grep 192.168.72.207	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:41.020149  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
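The one-liner above is an idempotent /etc/hosts update: drop any existing control-plane.minikube.internal line, append the fresh mapping, and install the temp file with sudo cp (a plain redirection would run without root). The same idiom, generalized; the helper name is illustrative:

    # Generalized form of the hosts-file update idiom used above; the function name is made up.
    update_hosts_entry() {
      local ip="$1" name="$2" tmp
      tmp=$(mktemp)
      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
      sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
    }
    update_hosts_entry 192.168.72.207 control-plane.minikube.internal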
	I0729 13:38:41.031970  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:41.163779  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:41.181723  300705 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920 for IP: 192.168.72.207
	I0729 13:38:41.181746  300705 certs.go:194] generating shared ca certs ...
	I0729 13:38:41.181764  300705 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:41.181989  300705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:41.182053  300705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:41.182067  300705 certs.go:256] generating profile certs ...
	I0729 13:38:41.182191  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/client.key
	I0729 13:38:41.182257  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key.45ab1b35
	I0729 13:38:41.182306  300705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key
	I0729 13:38:41.182454  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:41.182501  300705 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:41.182517  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:41.182553  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:41.182583  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:41.182607  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:41.182647  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:41.183522  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:41.239170  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:41.278086  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:41.318584  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:41.351639  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 13:38:41.389242  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:38:41.414897  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:41.439178  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:38:41.464278  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:41.488391  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:41.515271  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:41.539904  300705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:41.557036  300705 ssh_runner.go:195] Run: openssl version
	I0729 13:38:41.562935  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:41.580782  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585603  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585670  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.591504  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:41.602129  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:41.612441  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616813  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616866  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.622328  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:41.633108  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:41.643897  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648369  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648415  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.654085  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:41.665037  300705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:41.670067  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:41.676340  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:41.682386  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:41.688809  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:41.694957  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:41.700469  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
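The `-checkend 86400` invocations above ask openssl whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; a zero exit status means the cert does not expire within that window. A minimal, hypothetical Go sketch of the same check (illustrative only, not minikube's actual implementation) could look like:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor reports whether the certificate at path is still valid for at
// least the given number of seconds, using the same
// `openssl x509 -noout -checkend` invocation seen in the log above.
func certValidFor(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		// openssl exits non-zero when the cert expires within the window.
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil
		}
		return false, err
	}
	return true, nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(ok, err)
}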
	I0729 13:38:41.706471  300705 kubeadm.go:392] StartCluster: {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:41.706561  300705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:41.706617  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.746623  300705 cri.go:89] found id: ""
	I0729 13:38:41.746703  300705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:41.757101  300705 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:41.757121  300705 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:41.757174  300705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:41.766817  300705 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:41.767837  300705 kubeconfig.go:125] found "embed-certs-135920" server: "https://192.168.72.207:8443"
	I0729 13:38:41.770191  300705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:41.779930  300705 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.207
	I0729 13:38:41.779961  300705 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:41.779976  300705 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:41.780030  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.816273  300705 cri.go:89] found id: ""
	I0729 13:38:41.816350  300705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:41.836512  300705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:41.847230  300705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:41.847249  300705 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:41.847297  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:41.856215  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:41.856262  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:41.866646  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:41.876656  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:41.876723  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:41.886810  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.895693  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:41.895755  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.904774  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:41.915232  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:41.915301  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:41.924961  300705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:41.937051  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:42.059359  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:39.329415  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.826891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.361613  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.861155  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.361524  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.862047  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.361778  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.862055  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.861737  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.361194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.862019  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.326814  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:45.666203  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:42.934386  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.142119  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.221754  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
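On this restart path minikube does not run a full `kubeadm init`; the log above shows it re-running individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A rough, hypothetical Go sketch of driving that phase sequence (not minikube's actual code; paths taken from the log for illustration):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases re-runs the kubeadm init phases seen in the log, in order,
// against a fixed config file. Illustrative sketch only.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		// Prefix PATH with the version-pinned binary directory, as the log's
		// `sudo env PATH=...` wrapper does.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(runInitPhases("/var/lib/minikube/binaries/v1.30.3", "/var/tmp/minikube/kubeadm.yaml"))
}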
	I0729 13:38:43.346345  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:43.346451  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.847275  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.347551  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.391680  300705 api_server.go:72] duration metric: took 1.045336573s to wait for apiserver process to appear ...
	I0729 13:38:44.391709  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:44.391735  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:44.392354  300705 api_server.go:269] stopped: https://192.168.72.207:8443/healthz: Get "https://192.168.72.207:8443/healthz": dial tcp 192.168.72.207:8443: connect: connection refused
	I0729 13:38:44.892773  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.149059  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.149101  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.149128  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.161645  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.161672  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.391878  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.396499  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.396527  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:47.892015  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.897406  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.897436  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:48.391867  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:48.395941  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:38:48.401926  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:48.401951  300705 api_server.go:131] duration metric: took 4.010234721s to wait for apiserver health ...
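The wait loop above polls the apiserver's /healthz endpoint, treating the 403 and 500 responses as "not ready yet" and stopping once it sees 200. A small illustrative Go sketch of that polling pattern (a hypothetical stand-in, not the minikube source) could be:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// TLS verification is skipped here because the apiserver serves a self-signed cert.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
			// 403/500 mean the apiserver is up but not fully initialised yet.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.207:8443/healthz", 4*time.Minute))
}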
	I0729 13:38:48.401962  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:48.401970  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:48.403912  300705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:44.073092  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:46.327011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:48.405332  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:48.416550  300705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:48.439881  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:48.452435  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:48.452477  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:48.452527  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:48.452544  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:48.452556  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:48.452575  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:48.452584  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:48.452594  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:48.452604  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:48.452617  300705 system_pods.go:74] duration metric: took 12.710662ms to wait for pod list to return data ...
	I0729 13:38:48.452629  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:48.455453  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:48.455484  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:48.455497  300705 node_conditions.go:105] duration metric: took 2.858433ms to run NodePressure ...
	I0729 13:38:48.455518  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:48.791507  300705 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796191  300705 kubeadm.go:739] kubelet initialised
	I0729 13:38:48.796213  300705 kubeadm.go:740] duration metric: took 4.674843ms waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796222  300705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:48.802395  300705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.807224  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807247  300705 pod_ready.go:81] duration metric: took 4.825485ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.807263  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807269  300705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.812485  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812516  300705 pod_ready.go:81] duration metric: took 5.235923ms for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.812529  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812536  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.817345  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817374  300705 pod_ready.go:81] duration metric: took 4.827847ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.817383  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817390  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.843709  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843754  300705 pod_ready.go:81] duration metric: took 26.35618ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.843775  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843783  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.243226  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243257  300705 pod_ready.go:81] duration metric: took 399.464753ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.243269  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243278  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.643370  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643399  300705 pod_ready.go:81] duration metric: took 400.112533ms for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.643410  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643416  300705 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:50.044089  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044119  300705 pod_ready.go:81] duration metric: took 400.694081ms for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:50.044128  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044135  300705 pod_ready.go:38] duration metric: took 1.247904039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:50.044153  300705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:50.055730  300705 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:50.055755  300705 kubeadm.go:597] duration metric: took 8.298625813s to restartPrimaryControlPlane
	I0729 13:38:50.055765  300705 kubeadm.go:394] duration metric: took 8.349303256s to StartCluster
	I0729 13:38:50.055785  300705 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.055869  300705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:50.057734  300705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.058013  300705 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:50.058092  300705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:50.058165  300705 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-135920"
	I0729 13:38:50.058216  300705 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-135920"
	W0729 13:38:50.058230  300705 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:50.058217  300705 addons.go:69] Setting default-storageclass=true in profile "embed-certs-135920"
	I0729 13:38:50.058244  300705 addons.go:69] Setting metrics-server=true in profile "embed-certs-135920"
	I0729 13:38:50.058268  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058270  300705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-135920"
	I0729 13:38:50.058297  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:50.058305  300705 addons.go:234] Setting addon metrics-server=true in "embed-certs-135920"
	W0729 13:38:50.058350  300705 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:50.058416  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058719  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058746  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058763  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058766  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058732  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058835  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.061029  300705 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:50.062610  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:50.074642  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0729 13:38:50.074661  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0729 13:38:50.075119  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0729 13:38:50.075217  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075310  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075570  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075833  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.075856  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076049  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076066  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076273  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076367  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076393  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076434  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076620  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.076863  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.076912  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.076959  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.077488  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.077519  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.080392  300705 addons.go:234] Setting addon default-storageclass=true in "embed-certs-135920"
	W0729 13:38:50.080419  300705 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:50.080458  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.080872  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.080914  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.093352  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0729 13:38:50.093981  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.094704  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.094742  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.095201  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.095452  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.095863  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0729 13:38:50.096287  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096506  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0729 13:38:50.096945  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096974  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.096991  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.097343  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.097408  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.097508  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.097529  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.099585  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.099600  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.099936  300705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:50.100730  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.100765  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.101377  300705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.101399  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:50.101424  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.101563  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.103218  300705 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:46.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.862046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.362045  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.361183  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.862026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.361204  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.861490  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.361635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.861519  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.104927  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:50.104948  300705 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:50.104971  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.105309  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106036  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.106207  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106369  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.106615  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.106716  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.106817  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.108316  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.108859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108908  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.109081  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.109240  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.109354  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.119251  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0729 13:38:50.119703  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.120206  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.120235  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.120620  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.120813  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.122685  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.122898  300705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.122910  300705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:50.122923  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.125412  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.125875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.125914  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.126140  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.126321  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.126448  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.126566  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.254664  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:50.276352  300705 node_ready.go:35] waiting up to 6m0s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:50.328315  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.412968  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.459653  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:50.459697  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:50.513203  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:50.513237  300705 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:50.576439  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.576469  300705 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:50.611994  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.701214  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701569  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.701636  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701647  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701657  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701663  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701909  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701936  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701939  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.707113  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.707130  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.707390  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.707407  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.707407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.625719  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212712139s)
	I0729 13:38:51.625766  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.625778  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626066  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.626109  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626117  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.626135  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.626143  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626412  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626430  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662030  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.049982518s)
	I0729 13:38:51.662094  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662110  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.662391  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.662759  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.662781  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662798  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.663076  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.663117  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.663126  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.663138  300705 addons.go:475] Verifying addon metrics-server=true in "embed-certs-135920"
	I0729 13:38:51.666005  300705 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 13:38:47.666568  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.167349  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.667365  300705 addons.go:510] duration metric: took 1.609276005s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
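Each addon above is installed by copying its manifests under /etc/kubernetes/addons/ on the node and applying them with the cluster's version-pinned kubectl and the on-node kubeconfig, as the `KUBECONFIG=... kubectl apply -f ...` runs show. An equivalent, purely illustrative Go sketch of that apply step (paths taken from the log; not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests applies one or more addon manifests with a specific
// kubectl binary and kubeconfig, mirroring the log's kubectl apply runs.
func applyAddonManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	fmt.Println(err)
}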
	I0729 13:38:52.280219  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.826113  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.826826  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:53.327720  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.861510  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.362026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.861182  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.361850  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.861931  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.362035  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.861192  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.361173  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.862018  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.665875  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.666184  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.779805  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:56.780550  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:55.826349  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:58.326186  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:56.361740  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.862033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.362084  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.861406  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.861194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.361788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.861962  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.362043  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.862000  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.166551  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:59.167246  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.666773  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:57.780677  300705 node_ready.go:49] node "embed-certs-135920" has status "Ready":"True"
	I0729 13:38:57.780700  300705 node_ready.go:38] duration metric: took 7.504317897s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:57.780709  300705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:57.786299  300705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791107  300705 pod_ready.go:92] pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:57.791132  300705 pod_ready.go:81] duration metric: took 4.805712ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791143  300705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:59.806437  300705 pod_ready.go:102] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:00.296725  300705 pod_ready.go:92] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.296772  300705 pod_ready.go:81] duration metric: took 2.505622037s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.296782  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302450  300705 pod_ready.go:92] pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.302471  300705 pod_ready.go:81] duration metric: took 5.680644ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302482  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306734  300705 pod_ready.go:92] pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.306753  300705 pod_ready.go:81] duration metric: took 4.264085ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306762  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311745  300705 pod_ready.go:92] pod "kube-proxy-sn8bc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.311763  300705 pod_ready.go:81] duration metric: took 4.990061ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311773  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817465  300705 pod_ready.go:92] pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:01.817489  300705 pod_ready.go:81] duration metric: took 1.50570948s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817499  300705 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.825911  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.325485  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.362213  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.861107  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.361767  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.861151  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.361607  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.862013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.362032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.861858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.361611  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.862037  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.667047  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.166825  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.826817  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.326374  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:05.325891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:07.326167  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.362002  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.861635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.361659  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.862061  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.862083  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.361356  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.861763  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.361420  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.861822  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.666165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:10.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:08.824692  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.324207  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:09.326609  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.826082  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.362046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.861909  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.861834  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.361461  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.861666  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.861830  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.361141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.862003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.167800  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.665790  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:13.325286  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.826111  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:14.327217  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.826625  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.361731  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.862014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.361702  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.862141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.361808  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.361104  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.861123  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.361276  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.861176  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.666780  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.165629  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:18.328096  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.824426  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:19.326628  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.825705  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.362052  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.861150  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.361802  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.861996  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.362106  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.861135  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.361998  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.862048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.361848  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.861813  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.666434  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.666549  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:22.824988  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.825210  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.825579  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:23.826380  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:25.826544  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:27.826988  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:26.861651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:26.861733  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:26.904275  301425 cri.go:89] found id: ""
	I0729 13:39:26.904307  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.904315  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:26.904322  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:26.904387  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:26.946925  301425 cri.go:89] found id: ""
	I0729 13:39:26.946954  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.946966  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:26.946973  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:26.947036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:26.979236  301425 cri.go:89] found id: ""
	I0729 13:39:26.979267  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.979276  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:26.979282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:26.979330  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:27.022185  301425 cri.go:89] found id: ""
	I0729 13:39:27.022212  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.022220  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:27.022226  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:27.022277  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:27.055228  301425 cri.go:89] found id: ""
	I0729 13:39:27.055256  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.055266  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:27.055274  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:27.055335  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:27.088885  301425 cri.go:89] found id: ""
	I0729 13:39:27.088918  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.088926  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:27.088933  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:27.088986  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:27.123861  301425 cri.go:89] found id: ""
	I0729 13:39:27.123893  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.123902  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:27.123915  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:27.123967  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:27.157921  301425 cri.go:89] found id: ""
	I0729 13:39:27.157956  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.157964  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:27.157988  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:27.158003  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.222447  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:27.222489  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:27.265646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:27.265680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:27.317344  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:27.317388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:27.333664  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:27.333689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:27.460502  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:29.960703  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:29.974159  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:29.974235  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:30.009701  301425 cri.go:89] found id: ""
	I0729 13:39:30.009740  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.009753  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:30.009761  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:30.009822  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:30.045806  301425 cri.go:89] found id: ""
	I0729 13:39:30.045841  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.045853  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:30.045860  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:30.045924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:30.078709  301425 cri.go:89] found id: ""
	I0729 13:39:30.078738  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.078747  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:30.078753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:30.078808  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:30.112884  301425 cri.go:89] found id: ""
	I0729 13:39:30.112920  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.112932  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:30.112943  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:30.113012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:30.148160  301425 cri.go:89] found id: ""
	I0729 13:39:30.148196  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.148208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:30.148217  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:30.148285  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:30.186939  301425 cri.go:89] found id: ""
	I0729 13:39:30.186967  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.186975  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:30.186981  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:30.187039  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:30.241888  301425 cri.go:89] found id: ""
	I0729 13:39:30.241915  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.241926  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:30.241934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:30.242009  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:30.281482  301425 cri.go:89] found id: ""
	I0729 13:39:30.281510  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.281518  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:30.281527  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:30.281540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:30.321688  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:30.321730  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:30.378464  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:30.378508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:30.394109  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:30.394150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:30.474077  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:30.474101  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:30.474118  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.166322  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.166623  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.666142  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.323534  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.324750  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:30.327219  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:32.826011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.046016  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:33.059705  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:33.059795  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:33.096521  301425 cri.go:89] found id: ""
	I0729 13:39:33.096549  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.096557  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:33.096564  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:33.096621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:33.131262  301425 cri.go:89] found id: ""
	I0729 13:39:33.131295  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.131307  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:33.131314  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:33.131378  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:33.168889  301425 cri.go:89] found id: ""
	I0729 13:39:33.168915  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.168925  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:33.168932  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:33.168994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:33.205513  301425 cri.go:89] found id: ""
	I0729 13:39:33.205547  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.205558  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:33.205567  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:33.205644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:33.247051  301425 cri.go:89] found id: ""
	I0729 13:39:33.247079  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.247087  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:33.247093  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:33.247149  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:33.279541  301425 cri.go:89] found id: ""
	I0729 13:39:33.279575  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.279587  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:33.279596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:33.279659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:33.314000  301425 cri.go:89] found id: ""
	I0729 13:39:33.314034  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.314046  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:33.314054  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:33.314117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:33.351363  301425 cri.go:89] found id: ""
	I0729 13:39:33.351390  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.351401  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:33.351412  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:33.351437  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:33.413509  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:33.413547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:33.428128  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:33.428165  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:33.495430  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:33.495461  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:33.495478  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:33.574060  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:33.574098  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:34.166133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.167919  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.823668  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.824684  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.326216  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826516  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.113561  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:36.126899  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:36.126965  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:36.163363  301425 cri.go:89] found id: ""
	I0729 13:39:36.163396  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.163407  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:36.163414  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:36.163473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:36.205215  301425 cri.go:89] found id: ""
	I0729 13:39:36.205243  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.205259  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:36.205267  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:36.205331  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:36.243166  301425 cri.go:89] found id: ""
	I0729 13:39:36.243220  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.243231  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:36.243239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:36.243295  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:36.280804  301425 cri.go:89] found id: ""
	I0729 13:39:36.280836  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.280845  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:36.280852  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:36.280903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:36.317291  301425 cri.go:89] found id: ""
	I0729 13:39:36.317320  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.317330  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:36.317337  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:36.317399  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:36.358111  301425 cri.go:89] found id: ""
	I0729 13:39:36.358145  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.358156  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:36.358164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:36.358229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:36.399407  301425 cri.go:89] found id: ""
	I0729 13:39:36.399440  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.399451  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:36.399459  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:36.399525  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:36.437876  301425 cri.go:89] found id: ""
	I0729 13:39:36.437904  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.437914  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:36.437926  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:36.437942  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:36.514464  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:36.514493  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:36.514511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:36.592036  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:36.592083  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.647650  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:36.647691  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:36.706890  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:36.706935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.226070  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:39.239313  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:39.239373  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:39.274158  301425 cri.go:89] found id: ""
	I0729 13:39:39.274191  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.274202  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:39.274210  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:39.274286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:39.308448  301425 cri.go:89] found id: ""
	I0729 13:39:39.308484  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.308492  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:39.308499  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:39.308563  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:39.347745  301425 cri.go:89] found id: ""
	I0729 13:39:39.347782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.347791  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:39.347798  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:39.347856  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:39.380649  301425 cri.go:89] found id: ""
	I0729 13:39:39.380679  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.380688  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:39.380696  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:39.380767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:39.415076  301425 cri.go:89] found id: ""
	I0729 13:39:39.415107  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.415115  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:39.415120  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:39.415170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:39.450749  301425 cri.go:89] found id: ""
	I0729 13:39:39.450782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.450793  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:39.450801  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:39.450864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:39.482148  301425 cri.go:89] found id: ""
	I0729 13:39:39.482175  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.482184  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:39.482190  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:39.482239  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:39.518558  301425 cri.go:89] found id: ""
	I0729 13:39:39.518588  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.518597  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:39.518608  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:39.518622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:39.555753  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:39.555786  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:39.606627  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:39.606661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.620359  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:39.620388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:39.690685  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:39.690711  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:39.690728  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:38.665446  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.666445  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826801  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.325166  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:39.827390  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.326038  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.271925  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:42.284365  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:42.284447  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:42.318966  301425 cri.go:89] found id: ""
	I0729 13:39:42.318998  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.319020  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:42.319028  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:42.319111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:42.354811  301425 cri.go:89] found id: ""
	I0729 13:39:42.354840  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.354854  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:42.354862  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:42.354917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:42.402524  301425 cri.go:89] found id: ""
	I0729 13:39:42.402557  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.402569  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:42.402577  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:42.402643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:42.460954  301425 cri.go:89] found id: ""
	I0729 13:39:42.460984  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.461001  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:42.461010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:42.461063  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:42.516849  301425 cri.go:89] found id: ""
	I0729 13:39:42.516880  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.516890  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:42.516898  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:42.516963  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:42.560289  301425 cri.go:89] found id: ""
	I0729 13:39:42.560316  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.560325  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:42.560332  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:42.560397  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:42.597798  301425 cri.go:89] found id: ""
	I0729 13:39:42.597829  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.597839  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:42.597847  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:42.597912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:42.633015  301425 cri.go:89] found id: ""
	I0729 13:39:42.633043  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.633059  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:42.633068  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:42.633080  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:42.711103  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:42.711126  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:42.711141  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.787459  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:42.787499  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:42.828965  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:42.829002  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:42.881702  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:42.881740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:45.396462  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:45.410766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:45.410859  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:45.445886  301425 cri.go:89] found id: ""
	I0729 13:39:45.445931  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.445943  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:45.445960  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:45.446023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:45.484293  301425 cri.go:89] found id: ""
	I0729 13:39:45.484326  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.484338  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:45.484346  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:45.484410  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:45.520209  301425 cri.go:89] found id: ""
	I0729 13:39:45.520237  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.520246  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:45.520252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:45.520300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:45.555671  301425 cri.go:89] found id: ""
	I0729 13:39:45.555702  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.555711  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:45.555717  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:45.555767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:45.594578  301425 cri.go:89] found id: ""
	I0729 13:39:45.594609  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.594618  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:45.594624  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:45.594685  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:45.631777  301425 cri.go:89] found id: ""
	I0729 13:39:45.631805  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.631817  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:45.631825  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:45.631881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:45.667163  301425 cri.go:89] found id: ""
	I0729 13:39:45.667189  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.667197  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:45.667203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:45.667258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:45.703393  301425 cri.go:89] found id: ""
	I0729 13:39:45.703434  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.703443  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:45.703454  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:45.703488  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:45.774424  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:45.774452  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:45.774472  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:45.857529  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:45.857586  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:45.899737  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:45.899775  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:45.952640  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:45.952685  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:42.666728  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.165982  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.825543  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.323544  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:47.323595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:44.825237  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:46.825276  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.467705  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:48.482292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:48.482380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:48.520146  301425 cri.go:89] found id: ""
	I0729 13:39:48.520181  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.520195  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:48.520204  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:48.520282  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:48.552623  301425 cri.go:89] found id: ""
	I0729 13:39:48.552654  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.552665  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:48.552672  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:48.552734  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:48.587254  301425 cri.go:89] found id: ""
	I0729 13:39:48.587290  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.587303  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:48.587309  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:48.587368  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:48.621045  301425 cri.go:89] found id: ""
	I0729 13:39:48.621076  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.621088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:48.621096  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:48.621160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:48.654117  301425 cri.go:89] found id: ""
	I0729 13:39:48.654151  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.654163  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:48.654171  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:48.654236  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:48.693108  301425 cri.go:89] found id: ""
	I0729 13:39:48.693149  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.693166  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:48.693173  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:48.693225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:48.733000  301425 cri.go:89] found id: ""
	I0729 13:39:48.733025  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.733033  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:48.733039  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:48.733088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:48.773761  301425 cri.go:89] found id: ""
	I0729 13:39:48.773789  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.773798  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:48.773807  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:48.773822  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:48.826655  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:48.826683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.840335  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:48.840364  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:48.913727  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:48.913754  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:48.913774  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:48.990196  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:48.990235  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:47.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.167105  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.667165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.324027  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.324146  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.825859  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.326299  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.533333  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:51.547115  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:51.547175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:51.583247  301425 cri.go:89] found id: ""
	I0729 13:39:51.583284  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.583292  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:51.583297  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:51.583350  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:51.618925  301425 cri.go:89] found id: ""
	I0729 13:39:51.618958  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.618969  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:51.618977  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:51.619036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:51.657099  301425 cri.go:89] found id: ""
	I0729 13:39:51.657132  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.657144  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:51.657151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:51.657210  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:51.695413  301425 cri.go:89] found id: ""
	I0729 13:39:51.695459  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.695471  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:51.695480  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:51.695553  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:51.731153  301425 cri.go:89] found id: ""
	I0729 13:39:51.731186  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.731198  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:51.731206  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:51.731271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:51.765662  301425 cri.go:89] found id: ""
	I0729 13:39:51.765716  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.765730  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:51.765740  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:51.765807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:51.800442  301425 cri.go:89] found id: ""
	I0729 13:39:51.800480  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.800491  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:51.800500  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:51.800562  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:51.844516  301425 cri.go:89] found id: ""
	I0729 13:39:51.844542  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.844551  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:51.844562  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:51.844580  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:51.896139  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:51.896176  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:51.910479  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:51.910511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:51.980025  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:51.980052  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:51.980071  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:52.054674  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:52.054717  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.596468  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:54.612233  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:54.612344  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:54.653506  301425 cri.go:89] found id: ""
	I0729 13:39:54.653547  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.653558  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:54.653565  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:54.653624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:54.696964  301425 cri.go:89] found id: ""
	I0729 13:39:54.697002  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.697015  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:54.697023  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:54.697088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:54.731165  301425 cri.go:89] found id: ""
	I0729 13:39:54.731196  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.731207  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:54.731214  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:54.731279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:54.774397  301425 cri.go:89] found id: ""
	I0729 13:39:54.774426  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.774437  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:54.774444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:54.774506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:54.813365  301425 cri.go:89] found id: ""
	I0729 13:39:54.813396  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.813408  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:54.813414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:54.813480  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:54.849936  301425 cri.go:89] found id: ""
	I0729 13:39:54.849962  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.849970  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:54.849980  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:54.850042  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:54.883979  301425 cri.go:89] found id: ""
	I0729 13:39:54.884007  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.884015  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:54.884021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:54.884087  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:54.919754  301425 cri.go:89] found id: ""
	I0729 13:39:54.919779  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.919787  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:54.919796  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:54.919817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:54.973082  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:54.973117  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:54.986534  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:54.986571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:55.055473  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:55.055499  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:55.055514  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:55.138278  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:55.138322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.166585  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:56.166714  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.824525  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.824559  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.825238  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.826464  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.826664  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.683818  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:57.698992  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:57.699070  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:57.742071  301425 cri.go:89] found id: ""
	I0729 13:39:57.742103  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.742113  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:57.742121  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:57.742185  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:57.777871  301425 cri.go:89] found id: ""
	I0729 13:39:57.777902  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.777911  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:57.777918  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:57.777975  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:57.817767  301425 cri.go:89] found id: ""
	I0729 13:39:57.817798  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.817809  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:57.817817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:57.817889  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:57.855608  301425 cri.go:89] found id: ""
	I0729 13:39:57.855634  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.855644  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:57.855651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:57.855714  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:57.891219  301425 cri.go:89] found id: ""
	I0729 13:39:57.891248  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.891258  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:57.891266  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:57.891336  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:57.926000  301425 cri.go:89] found id: ""
	I0729 13:39:57.926034  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.926045  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:57.926053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:57.926116  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:57.964935  301425 cri.go:89] found id: ""
	I0729 13:39:57.964962  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.964978  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:57.964985  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:57.965051  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:58.001363  301425 cri.go:89] found id: ""
	I0729 13:39:58.001393  301425 logs.go:276] 0 containers: []
	W0729 13:39:58.001405  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:58.001417  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:58.001434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:58.057551  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:58.057598  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:58.072162  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:58.072200  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:58.140533  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:58.140565  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:58.140582  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:58.227285  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:58.227330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:00.769075  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:00.783394  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:00.783471  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:00.831260  301425 cri.go:89] found id: ""
	I0729 13:40:00.831291  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.831301  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:00.831309  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:00.831370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:00.870017  301425 cri.go:89] found id: ""
	I0729 13:40:00.870045  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.870057  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:00.870065  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:00.870127  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:00.904691  301425 cri.go:89] found id: ""
	I0729 13:40:00.904728  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.904740  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:00.904748  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:00.904828  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:00.937221  301425 cri.go:89] found id: ""
	I0729 13:40:00.937249  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.937259  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:00.937265  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:00.937329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:58.167355  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.666837  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.824755  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.324616  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.325368  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.325689  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.326062  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.977961  301425 cri.go:89] found id: ""
	I0729 13:40:00.977991  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.978002  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:00.978010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:00.978104  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:01.014239  301425 cri.go:89] found id: ""
	I0729 13:40:01.014271  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.014283  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:01.014292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:01.014362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:01.050583  301425 cri.go:89] found id: ""
	I0729 13:40:01.050615  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.050630  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:01.050637  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:01.050696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:01.091599  301425 cri.go:89] found id: ""
	I0729 13:40:01.091624  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.091634  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:01.091643  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:01.091661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:01.146404  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:01.146445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:01.160327  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:01.160358  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:01.237120  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:01.237147  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:01.237162  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:01.321539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:01.321590  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:03.865268  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:03.879648  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:03.879724  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:03.915303  301425 cri.go:89] found id: ""
	I0729 13:40:03.915329  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.915338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:03.915344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:03.915403  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:03.951982  301425 cri.go:89] found id: ""
	I0729 13:40:03.952014  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.952023  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:03.952032  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:03.952099  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:03.989751  301425 cri.go:89] found id: ""
	I0729 13:40:03.989785  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.989796  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:03.989804  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:03.989870  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:04.026934  301425 cri.go:89] found id: ""
	I0729 13:40:04.026975  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.026988  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:04.026996  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:04.027059  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:04.064135  301425 cri.go:89] found id: ""
	I0729 13:40:04.064165  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.064175  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:04.064187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:04.064256  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:04.103080  301425 cri.go:89] found id: ""
	I0729 13:40:04.103108  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.103117  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:04.103123  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:04.103172  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:04.143370  301425 cri.go:89] found id: ""
	I0729 13:40:04.143403  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.143414  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:04.143422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:04.143491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:04.179251  301425 cri.go:89] found id: ""
	I0729 13:40:04.179286  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.179298  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:04.179311  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:04.179330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:04.261058  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:04.261089  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:04.261111  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:04.342897  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:04.342935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:04.391504  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:04.391532  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:04.443064  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:04.443106  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:03.166195  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:05.166660  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.824882  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:07.324346  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.326236  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.825685  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.959346  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:06.974377  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:06.974444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:07.007797  301425 cri.go:89] found id: ""
	I0729 13:40:07.007834  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.007847  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:07.007856  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:07.007924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:07.042707  301425 cri.go:89] found id: ""
	I0729 13:40:07.042741  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.042749  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:07.042755  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:07.042807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:07.080150  301425 cri.go:89] found id: ""
	I0729 13:40:07.080185  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.080196  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:07.080203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:07.080268  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:07.115740  301425 cri.go:89] found id: ""
	I0729 13:40:07.115777  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.115788  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:07.115796  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:07.115888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:07.154110  301425 cri.go:89] found id: ""
	I0729 13:40:07.154141  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.154151  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:07.154158  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:07.154225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:07.190819  301425 cri.go:89] found id: ""
	I0729 13:40:07.190850  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.190858  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:07.190865  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:07.190917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:07.231530  301425 cri.go:89] found id: ""
	I0729 13:40:07.231560  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.231571  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:07.231579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:07.231643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:07.272211  301425 cri.go:89] found id: ""
	I0729 13:40:07.272240  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.272247  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:07.272257  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:07.272269  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.326673  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:07.326704  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:07.341255  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:07.341282  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:07.409850  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:07.409878  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:07.409895  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:07.493105  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:07.493169  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.033906  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:10.047938  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:10.048018  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:10.084224  301425 cri.go:89] found id: ""
	I0729 13:40:10.084251  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.084259  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:10.084265  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:10.084316  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:10.120362  301425 cri.go:89] found id: ""
	I0729 13:40:10.120398  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.120409  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:10.120417  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:10.120484  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:10.154128  301425 cri.go:89] found id: ""
	I0729 13:40:10.154160  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.154170  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:10.154178  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:10.154243  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:10.189539  301425 cri.go:89] found id: ""
	I0729 13:40:10.189574  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.189588  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:10.189596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:10.189661  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:10.228821  301425 cri.go:89] found id: ""
	I0729 13:40:10.228855  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.228867  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:10.228875  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:10.228950  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:10.274726  301425 cri.go:89] found id: ""
	I0729 13:40:10.274758  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.274769  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:10.274776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:10.274845  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:10.308910  301425 cri.go:89] found id: ""
	I0729 13:40:10.308945  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.308956  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:10.308964  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:10.309030  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:10.346008  301425 cri.go:89] found id: ""
	I0729 13:40:10.346044  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.346056  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:10.346069  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:10.346091  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:10.360541  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:10.360581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:10.433763  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:10.433788  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:10.433802  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:10.520366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:10.520418  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.561482  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:10.561512  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.668816  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:10.166833  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:09.823429  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.824033  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:08.826798  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.326762  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.327128  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.114858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:13.128348  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:13.128425  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:13.165329  301425 cri.go:89] found id: ""
	I0729 13:40:13.165359  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.165370  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:13.165377  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:13.165441  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:13.200104  301425 cri.go:89] found id: ""
	I0729 13:40:13.200135  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.200148  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:13.200155  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:13.200224  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:13.238632  301425 cri.go:89] found id: ""
	I0729 13:40:13.238680  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.238688  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:13.238694  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:13.238748  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:13.270859  301425 cri.go:89] found id: ""
	I0729 13:40:13.270892  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.270901  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:13.270907  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:13.270976  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:13.308346  301425 cri.go:89] found id: ""
	I0729 13:40:13.308378  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.308386  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:13.308392  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:13.308444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:13.346286  301425 cri.go:89] found id: ""
	I0729 13:40:13.346319  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.346331  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:13.346339  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:13.346412  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:13.383699  301425 cri.go:89] found id: ""
	I0729 13:40:13.383736  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.383769  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:13.383791  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:13.383850  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:13.419958  301425 cri.go:89] found id: ""
	I0729 13:40:13.420045  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.420058  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:13.420071  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:13.420094  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.473984  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:13.474028  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:13.488376  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:13.488410  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:13.559515  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:13.559543  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:13.559560  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:13.640528  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:13.640570  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:12.665799  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.666662  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.668217  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.323746  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.323961  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:15.826422  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.326284  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.189581  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:16.203962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:16.204052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:16.240537  301425 cri.go:89] found id: ""
	I0729 13:40:16.240572  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.240583  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:16.240591  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:16.240659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:16.277060  301425 cri.go:89] found id: ""
	I0729 13:40:16.277099  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.277112  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:16.277123  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:16.277200  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:16.313839  301425 cri.go:89] found id: ""
	I0729 13:40:16.313869  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.313878  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:16.313884  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:16.313935  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:16.351806  301425 cri.go:89] found id: ""
	I0729 13:40:16.351840  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.351850  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:16.351858  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:16.351922  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:16.387122  301425 cri.go:89] found id: ""
	I0729 13:40:16.387158  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.387169  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:16.387176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:16.387242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:16.424180  301425 cri.go:89] found id: ""
	I0729 13:40:16.424209  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.424220  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:16.424229  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:16.424292  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:16.461827  301425 cri.go:89] found id: ""
	I0729 13:40:16.461865  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.461879  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:16.461889  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:16.461946  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:16.510198  301425 cri.go:89] found id: ""
	I0729 13:40:16.510230  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.510238  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:16.510248  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:16.510264  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:16.585378  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:16.585420  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.629304  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:16.629337  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:16.682386  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:16.682434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:16.698405  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:16.698436  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:16.770281  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
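	The cycle above repeats while minikube waits for this control plane (apparently the old-k8s-version cluster, given the v1.20.0 binaries) to come up: it probes CRI-O for each expected control-plane container and, finding none, falls back to collecting diagnostics. A minimal sketch of the same probe, using the exact commands shown in the log (assumes crictl and CRI-O are present on the node):

	    # list any kube-apiserver containers known to CRI-O, in any state
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # the same check is repeated for etcd, coredns, kube-scheduler, kube-proxy,
	    # kube-controller-manager, kindnet and kubernetes-dashboard, e.g.:
	    sudo crictl ps -a --quiet --name=etcd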
	I0729 13:40:19.270551  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:19.284543  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:19.284617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:19.325194  301425 cri.go:89] found id: ""
	I0729 13:40:19.325221  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.325231  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:19.325238  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:19.325298  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:19.362007  301425 cri.go:89] found id: ""
	I0729 13:40:19.362038  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.362058  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:19.362066  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:19.362196  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:19.401162  301425 cri.go:89] found id: ""
	I0729 13:40:19.401191  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.401202  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:19.401210  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:19.401274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:19.434652  301425 cri.go:89] found id: ""
	I0729 13:40:19.434689  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.434700  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:19.434709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:19.434774  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:19.470116  301425 cri.go:89] found id: ""
	I0729 13:40:19.470149  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.470157  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:19.470164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:19.470218  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:19.503593  301425 cri.go:89] found id: ""
	I0729 13:40:19.503621  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.503629  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:19.503635  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:19.503696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:19.546127  301425 cri.go:89] found id: ""
	I0729 13:40:19.546155  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.546164  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:19.546169  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:19.546217  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:19.584600  301425 cri.go:89] found id: ""
	I0729 13:40:19.584639  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.584650  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:19.584663  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:19.584681  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:19.599411  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:19.599446  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:19.665811  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.665836  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:19.665853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:19.747295  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:19.747339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:19.790476  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:19.790516  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:18.669004  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.166437  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.824788  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.327093  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:20.825470  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.827651  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.346725  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:22.361349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:22.361443  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:22.394840  301425 cri.go:89] found id: ""
	I0729 13:40:22.394870  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.394881  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:22.394889  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:22.394956  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:22.429328  301425 cri.go:89] found id: ""
	I0729 13:40:22.429356  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.429364  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:22.429370  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:22.429431  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:22.463179  301425 cri.go:89] found id: ""
	I0729 13:40:22.463206  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.463214  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:22.463220  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:22.463291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:22.497527  301425 cri.go:89] found id: ""
	I0729 13:40:22.497557  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.497565  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:22.497571  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:22.497627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:22.537607  301425 cri.go:89] found id: ""
	I0729 13:40:22.537635  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.537646  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:22.537654  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:22.537718  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:22.580658  301425 cri.go:89] found id: ""
	I0729 13:40:22.580689  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.580701  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:22.580709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:22.580775  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:22.622229  301425 cri.go:89] found id: ""
	I0729 13:40:22.622261  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.622270  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:22.622282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:22.622346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:22.660091  301425 cri.go:89] found id: ""
	I0729 13:40:22.660120  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.660129  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:22.660139  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:22.660153  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.715053  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:22.715090  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:22.728865  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:22.728898  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:22.805760  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:22.805785  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:22.805799  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:22.890915  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:22.890960  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:25.457272  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:25.471002  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:25.471088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:25.506190  301425 cri.go:89] found id: ""
	I0729 13:40:25.506226  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.506237  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:25.506244  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:25.506297  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:25.540957  301425 cri.go:89] found id: ""
	I0729 13:40:25.540991  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.541002  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:25.541011  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:25.541074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:25.578378  301425 cri.go:89] found id: ""
	I0729 13:40:25.578424  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.578440  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:25.578448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:25.578518  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:25.620930  301425 cri.go:89] found id: ""
	I0729 13:40:25.620962  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.620979  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:25.620987  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:25.621056  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:25.655558  301425 cri.go:89] found id: ""
	I0729 13:40:25.655589  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.655597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:25.655604  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:25.655670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:25.688810  301425 cri.go:89] found id: ""
	I0729 13:40:25.688845  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.688855  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:25.688863  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:25.688930  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:25.724384  301425 cri.go:89] found id: ""
	I0729 13:40:25.724416  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.724428  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:25.724435  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:25.724514  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:25.763174  301425 cri.go:89] found id: ""
	I0729 13:40:25.763200  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.763209  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:25.763219  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:25.763232  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:25.818517  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:25.818569  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:25.833939  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:25.833973  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:25.910487  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:25.910515  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:25.910537  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:23.167028  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.666513  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:23.824183  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.827054  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.325894  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:27.824855  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.993887  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:25.993929  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:28.536843  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:28.550097  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:28.550175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:28.592664  301425 cri.go:89] found id: ""
	I0729 13:40:28.592697  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.592709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:28.592716  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:28.592788  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:28.638299  301425 cri.go:89] found id: ""
	I0729 13:40:28.638329  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.638337  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:28.638343  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:28.638395  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:28.682410  301425 cri.go:89] found id: ""
	I0729 13:40:28.682437  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.682446  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:28.682452  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:28.682511  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:28.719402  301425 cri.go:89] found id: ""
	I0729 13:40:28.719430  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.719438  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:28.719444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:28.719504  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:28.767515  301425 cri.go:89] found id: ""
	I0729 13:40:28.767547  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.767559  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:28.767568  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:28.767633  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:28.811600  301425 cri.go:89] found id: ""
	I0729 13:40:28.811632  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.811644  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:28.811652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:28.811727  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:28.853364  301425 cri.go:89] found id: ""
	I0729 13:40:28.853397  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.853407  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:28.853414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:28.853486  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:28.890981  301425 cri.go:89] found id: ""
	I0729 13:40:28.891013  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.891024  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:28.891035  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:28.891050  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:28.944174  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:28.944213  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:28.957724  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:28.957755  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:29.026457  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:29.026479  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:29.026497  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:29.105366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:29.105415  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:27.667251  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.166789  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:28.323476  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.324242  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:32.325477  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:29.825621  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.828363  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.649374  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:31.663432  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:31.663512  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:31.702047  301425 cri.go:89] found id: ""
	I0729 13:40:31.702080  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.702088  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:31.702098  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:31.702162  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:31.738484  301425 cri.go:89] found id: ""
	I0729 13:40:31.738510  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.738518  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:31.738524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:31.738583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:31.774214  301425 cri.go:89] found id: ""
	I0729 13:40:31.774249  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.774261  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:31.774270  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:31.774339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:31.810263  301425 cri.go:89] found id: ""
	I0729 13:40:31.810293  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.810302  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:31.810307  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:31.810369  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:31.848124  301425 cri.go:89] found id: ""
	I0729 13:40:31.848153  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.848160  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:31.848167  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:31.848234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:31.885531  301425 cri.go:89] found id: ""
	I0729 13:40:31.885561  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.885571  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:31.885580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:31.885650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:31.923904  301425 cri.go:89] found id: ""
	I0729 13:40:31.923939  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.923952  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:31.923959  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:31.924029  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:31.957165  301425 cri.go:89] found id: ""
	I0729 13:40:31.957202  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.957213  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:31.957228  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:31.957248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:32.039221  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:32.039262  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.078191  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:32.078229  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:32.131871  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:32.131922  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:32.146676  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:32.146706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:32.223849  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:34.724927  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:34.739029  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:34.739113  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:34.774627  301425 cri.go:89] found id: ""
	I0729 13:40:34.774660  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.774669  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:34.774675  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:34.774743  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:34.809840  301425 cri.go:89] found id: ""
	I0729 13:40:34.809872  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.809882  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:34.809887  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:34.809940  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:34.847530  301425 cri.go:89] found id: ""
	I0729 13:40:34.847561  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.847572  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:34.847580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:34.847648  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:34.881828  301425 cri.go:89] found id: ""
	I0729 13:40:34.881856  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.881870  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:34.881876  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:34.881937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:34.918903  301425 cri.go:89] found id: ""
	I0729 13:40:34.918937  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.918949  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:34.918956  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:34.919015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:34.954714  301425 cri.go:89] found id: ""
	I0729 13:40:34.954749  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.954761  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:34.954770  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:34.954825  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:34.993433  301425 cri.go:89] found id: ""
	I0729 13:40:34.993463  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.993472  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:34.993478  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:34.993531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:35.033830  301425 cri.go:89] found id: ""
	I0729 13:40:35.033859  301425 logs.go:276] 0 containers: []
	W0729 13:40:35.033874  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:35.033884  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:35.033900  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:35.084546  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:35.084595  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:35.098807  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:35.098845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:35.182636  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:35.182662  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:35.182674  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:35.262767  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:35.262808  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.665817  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.670805  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.823905  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.824232  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.326644  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.825977  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:37.802033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:37.815633  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:37.815697  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:37.857522  301425 cri.go:89] found id: ""
	I0729 13:40:37.857552  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.857563  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:37.857571  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:37.857627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:37.897527  301425 cri.go:89] found id: ""
	I0729 13:40:37.897564  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.897575  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:37.897583  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:37.897649  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.937135  301425 cri.go:89] found id: ""
	I0729 13:40:37.937167  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.937176  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:37.937189  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:37.937255  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:37.972699  301425 cri.go:89] found id: ""
	I0729 13:40:37.972734  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.972751  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:37.972761  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:37.972933  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:38.012702  301425 cri.go:89] found id: ""
	I0729 13:40:38.012732  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.012740  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:38.012747  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:38.012832  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:38.050228  301425 cri.go:89] found id: ""
	I0729 13:40:38.050260  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.050268  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:38.050275  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:38.050329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:38.084665  301425 cri.go:89] found id: ""
	I0729 13:40:38.084693  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.084707  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:38.084715  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:38.084780  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:38.119155  301425 cri.go:89] found id: ""
	I0729 13:40:38.119200  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.119211  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:38.119222  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:38.119236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:38.170934  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:38.170968  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:38.185298  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:38.185329  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:38.256118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:38.256149  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:38.256166  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:38.337090  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:38.337127  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:40.876177  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:40.889580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:40.889655  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:40.922971  301425 cri.go:89] found id: ""
	I0729 13:40:40.923002  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.923010  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:40.923016  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:40.923074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:40.955840  301425 cri.go:89] found id: ""
	I0729 13:40:40.955872  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.955884  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:40.955891  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:40.955952  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.165718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.166160  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.168344  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:38.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.324607  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.324996  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.344232  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:40.993258  301425 cri.go:89] found id: ""
	I0729 13:40:40.993290  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.993298  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:40.993305  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:40.993357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:41.026370  301425 cri.go:89] found id: ""
	I0729 13:40:41.026398  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.026409  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:41.026416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:41.026473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:41.060538  301425 cri.go:89] found id: ""
	I0729 13:40:41.060565  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.060574  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:41.060579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:41.060630  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:41.105074  301425 cri.go:89] found id: ""
	I0729 13:40:41.105108  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.105118  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:41.105126  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:41.105193  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:41.138254  301425 cri.go:89] found id: ""
	I0729 13:40:41.138280  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.138288  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:41.138294  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:41.138342  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:41.171432  301425 cri.go:89] found id: ""
	I0729 13:40:41.171458  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.171466  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:41.171475  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:41.171487  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:41.184703  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:41.184736  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:41.265356  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:41.265392  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:41.265409  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:41.345939  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:41.345979  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:41.388819  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:41.388852  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:43.940388  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:43.955448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:43.955515  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:43.998457  301425 cri.go:89] found id: ""
	I0729 13:40:43.998494  301425 logs.go:276] 0 containers: []
	W0729 13:40:43.998506  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:43.998515  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:43.998584  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:44.038142  301425 cri.go:89] found id: ""
	I0729 13:40:44.038173  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.038185  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:44.038193  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:44.038260  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:44.077270  301425 cri.go:89] found id: ""
	I0729 13:40:44.077302  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.077313  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:44.077321  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:44.077391  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:44.117612  301425 cri.go:89] found id: ""
	I0729 13:40:44.117641  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.117661  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:44.117681  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:44.117749  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:44.152564  301425 cri.go:89] found id: ""
	I0729 13:40:44.152603  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.152615  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:44.152623  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:44.152683  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:44.188245  301425 cri.go:89] found id: ""
	I0729 13:40:44.188276  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.188288  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:44.188296  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:44.188355  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:44.224947  301425 cri.go:89] found id: ""
	I0729 13:40:44.224975  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.224983  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:44.224989  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:44.225037  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:44.264830  301425 cri.go:89] found id: ""
	I0729 13:40:44.264860  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.264867  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:44.264877  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:44.264893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:44.343145  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:44.343182  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:44.384619  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:44.384650  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:44.438195  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:44.438237  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:44.452115  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:44.452152  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:44.526586  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
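	When no control-plane containers are found, minikube gathers node diagnostics instead. The collection commands, copied from the log as they are run over SSH on the node:

	    sudo journalctl -u kubelet -n 400                                        # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings/errors
	    sudo journalctl -u crio -n 400                                           # CRI-O logs
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	The last command fails every time with "The connection to the server localhost:8443 was refused", i.e. the apiserver on this node never starts listening during the wait loop.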
	I0729 13:40:43.666987  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.167143  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.825141  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.324972  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.827065  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.325488  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
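	The pod_ready lines interleaved from the other PIDs (301044, 300705, 300746) come from the StartStop clusters running in parallel, each polling its metrics-server pod, which never reports Ready. A hedged way to reproduce that check by hand, reusing a pod name taken from the log (the profile name is a placeholder, not taken from the log):

	    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-dlrjb \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the wait loops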
	I0729 13:40:47.027726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:47.041174  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:47.041242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:47.079265  301425 cri.go:89] found id: ""
	I0729 13:40:47.079295  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.079304  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:47.079313  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:47.079380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:47.119775  301425 cri.go:89] found id: ""
	I0729 13:40:47.119807  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.119820  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:47.119828  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:47.119904  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:47.155381  301425 cri.go:89] found id: ""
	I0729 13:40:47.155415  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.155426  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:47.155434  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:47.155490  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:47.195071  301425 cri.go:89] found id: ""
	I0729 13:40:47.195103  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.195111  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:47.195117  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:47.195167  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:47.229487  301425 cri.go:89] found id: ""
	I0729 13:40:47.229519  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.229531  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:47.229539  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:47.229611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:47.266159  301425 cri.go:89] found id: ""
	I0729 13:40:47.266190  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.266201  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:47.266209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:47.266269  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:47.300813  301425 cri.go:89] found id: ""
	I0729 13:40:47.300845  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.300854  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:47.300860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:47.300916  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:47.340378  301425 cri.go:89] found id: ""
	I0729 13:40:47.340412  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.340432  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:47.340444  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:47.340464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:47.395403  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:47.395444  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:47.409505  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:47.409539  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:47.481327  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.481349  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:47.481365  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:47.560129  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:47.560172  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.105832  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:50.121192  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:50.121264  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:50.160217  301425 cri.go:89] found id: ""
	I0729 13:40:50.160247  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.160256  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:50.160262  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:50.160313  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:50.199952  301425 cri.go:89] found id: ""
	I0729 13:40:50.199986  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.199998  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:50.200005  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:50.200065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:50.240036  301425 cri.go:89] found id: ""
	I0729 13:40:50.240069  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.240076  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:50.240083  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:50.240134  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:50.279761  301425 cri.go:89] found id: ""
	I0729 13:40:50.279788  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.279796  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:50.279802  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:50.279852  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:50.320324  301425 cri.go:89] found id: ""
	I0729 13:40:50.320350  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.320358  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:50.320364  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:50.320423  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:50.356385  301425 cri.go:89] found id: ""
	I0729 13:40:50.356413  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.356421  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:50.356427  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:50.356482  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:50.396866  301425 cri.go:89] found id: ""
	I0729 13:40:50.396900  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.396912  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:50.396919  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:50.397008  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:50.434778  301425 cri.go:89] found id: ""
	I0729 13:40:50.434812  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.434823  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:50.434836  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:50.434853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:50.447746  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:50.447776  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:50.523750  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:50.523772  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:50.523787  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:50.604206  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:50.604255  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.647414  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:50.647449  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:48.666463  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.666670  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.823595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.824045  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.826836  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:51.326943  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.327715  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.201653  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:53.215745  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:53.215814  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:53.250482  301425 cri.go:89] found id: ""
	I0729 13:40:53.250508  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.250516  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:53.250522  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:53.250583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:53.285956  301425 cri.go:89] found id: ""
	I0729 13:40:53.285988  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.285996  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:53.286002  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:53.286055  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:53.320248  301425 cri.go:89] found id: ""
	I0729 13:40:53.320281  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.320292  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:53.320300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:53.320364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:53.355155  301425 cri.go:89] found id: ""
	I0729 13:40:53.355188  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.355200  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:53.355209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:53.355271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:53.389519  301425 cri.go:89] found id: ""
	I0729 13:40:53.389549  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.389557  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:53.389564  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:53.389620  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:53.424391  301425 cri.go:89] found id: ""
	I0729 13:40:53.424419  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.424427  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:53.424433  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:53.424492  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:53.463297  301425 cri.go:89] found id: ""
	I0729 13:40:53.463331  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.463342  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:53.463350  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:53.463433  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:53.497565  301425 cri.go:89] found id: ""
	I0729 13:40:53.497593  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.497601  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:53.497610  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:53.497622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.548906  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:53.548948  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:53.562789  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:53.562823  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:53.635656  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:53.635679  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:53.635693  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:53.715973  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:53.716024  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:53.166007  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.166420  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.324486  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.824480  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.825127  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.326505  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:56.258726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:56.273826  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:56.273905  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:56.310881  301425 cri.go:89] found id: ""
	I0729 13:40:56.310927  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.310936  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:56.310944  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:56.310999  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:56.350104  301425 cri.go:89] found id: ""
	I0729 13:40:56.350139  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.350151  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:56.350158  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:56.350221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:56.385100  301425 cri.go:89] found id: ""
	I0729 13:40:56.385136  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.385145  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:56.385151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:56.385234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:56.421904  301425 cri.go:89] found id: ""
	I0729 13:40:56.421941  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.421953  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:56.421961  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:56.422025  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:56.457366  301425 cri.go:89] found id: ""
	I0729 13:40:56.457403  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.457414  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:56.457422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:56.457491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:56.496700  301425 cri.go:89] found id: ""
	I0729 13:40:56.496732  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.496746  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:56.496755  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:56.496844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:56.532011  301425 cri.go:89] found id: ""
	I0729 13:40:56.532039  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.532047  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:56.532053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:56.532102  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:56.567511  301425 cri.go:89] found id: ""
	I0729 13:40:56.567543  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.567554  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:56.567566  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:56.567581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:56.615875  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:56.615914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:56.629818  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:56.629862  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:56.703255  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:56.703284  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:56.703298  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:56.786466  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:56.786508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:59.328670  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:59.342993  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:59.343061  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:59.378267  301425 cri.go:89] found id: ""
	I0729 13:40:59.378301  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.378313  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:59.378321  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:59.378392  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:59.415637  301425 cri.go:89] found id: ""
	I0729 13:40:59.415669  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.415680  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:59.415687  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:59.415759  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:59.451170  301425 cri.go:89] found id: ""
	I0729 13:40:59.451204  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.451212  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:59.451219  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:59.451275  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:59.485914  301425 cri.go:89] found id: ""
	I0729 13:40:59.485948  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.485960  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:59.485975  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:59.486052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:59.523168  301425 cri.go:89] found id: ""
	I0729 13:40:59.523198  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.523208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:59.523216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:59.523274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:59.557711  301425 cri.go:89] found id: ""
	I0729 13:40:59.557746  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.557758  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:59.557766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:59.557826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:59.593387  301425 cri.go:89] found id: ""
	I0729 13:40:59.593421  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.593434  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:59.593442  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:59.593506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:59.627521  301425 cri.go:89] found id: ""
	I0729 13:40:59.627555  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.627566  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:59.627578  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:59.627597  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:59.677497  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:59.677538  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:59.692116  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:59.692150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:59.759344  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:59.759369  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:59.759382  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:59.840380  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:59.840423  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:57.166964  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:59.666395  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:01.667229  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.323708  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.323995  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.325049  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.328293  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.826414  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.380718  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:02.394436  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:02.394497  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:02.433283  301425 cri.go:89] found id: ""
	I0729 13:41:02.433313  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.433323  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:02.433332  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:02.433393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:02.467206  301425 cri.go:89] found id: ""
	I0729 13:41:02.467232  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.467241  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:02.467247  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:02.467300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:02.502743  301425 cri.go:89] found id: ""
	I0729 13:41:02.502774  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.502783  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:02.502790  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:02.502844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:02.536415  301425 cri.go:89] found id: ""
	I0729 13:41:02.536449  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.536462  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:02.536470  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:02.536527  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:02.570572  301425 cri.go:89] found id: ""
	I0729 13:41:02.570610  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.570621  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:02.570629  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:02.570702  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:02.606251  301425 cri.go:89] found id: ""
	I0729 13:41:02.606277  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.606285  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:02.606292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:02.606345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:02.644637  301425 cri.go:89] found id: ""
	I0729 13:41:02.644664  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.644675  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:02.644683  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:02.644750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:02.679493  301425 cri.go:89] found id: ""
	I0729 13:41:02.679519  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.679527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:02.679537  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:02.679553  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.734865  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:02.734896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:02.787929  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:02.787962  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:02.801317  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:02.801344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:02.867838  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:02.867862  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:02.867877  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.451323  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:05.465262  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:05.465338  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:05.499797  301425 cri.go:89] found id: ""
	I0729 13:41:05.499827  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.499837  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:05.499845  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:05.499912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:05.534363  301425 cri.go:89] found id: ""
	I0729 13:41:05.534403  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.534416  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:05.534424  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:05.534483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:05.571366  301425 cri.go:89] found id: ""
	I0729 13:41:05.571397  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.571408  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:05.571416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:05.571481  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:05.611301  301425 cri.go:89] found id: ""
	I0729 13:41:05.611335  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.611346  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:05.611355  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:05.611422  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:05.650698  301425 cri.go:89] found id: ""
	I0729 13:41:05.650738  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.650750  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:05.650758  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:05.650823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:05.686166  301425 cri.go:89] found id: ""
	I0729 13:41:05.686204  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.686216  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:05.686225  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:05.686279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:05.724567  301425 cri.go:89] found id: ""
	I0729 13:41:05.724604  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.724616  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:05.724628  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:05.724691  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:05.760401  301425 cri.go:89] found id: ""
	I0729 13:41:05.760430  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.760438  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:05.760448  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:05.760464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:05.811654  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:05.811698  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:05.827189  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:05.827226  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:05.899612  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:05.899636  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:05.899654  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:04.168533  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.665694  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:04.325443  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.824244  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.325499  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:07.326413  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.982384  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:05.982425  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.527609  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:08.542024  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:08.542086  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:08.576313  301425 cri.go:89] found id: ""
	I0729 13:41:08.576340  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.576348  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:08.576354  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:08.576406  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:08.609996  301425 cri.go:89] found id: ""
	I0729 13:41:08.610027  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.610038  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:08.610045  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:08.610111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:08.643722  301425 cri.go:89] found id: ""
	I0729 13:41:08.643750  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.643758  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:08.643765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:08.643815  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:08.679331  301425 cri.go:89] found id: ""
	I0729 13:41:08.679367  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.679378  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:08.679388  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:08.679459  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:08.718348  301425 cri.go:89] found id: ""
	I0729 13:41:08.718376  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.718384  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:08.718390  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:08.718444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:08.758086  301425 cri.go:89] found id: ""
	I0729 13:41:08.758128  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.758140  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:08.758150  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:08.758225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:08.794304  301425 cri.go:89] found id: ""
	I0729 13:41:08.794333  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.794345  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:08.794354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:08.794415  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:08.835448  301425 cri.go:89] found id: ""
	I0729 13:41:08.835477  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.835486  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:08.835495  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:08.835508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:08.923886  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:08.923931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.963921  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:08.963957  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:09.013852  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:09.013893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:09.027838  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:09.027872  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:09.097864  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:08.669271  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.165979  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:08.824724  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:10.825582  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:09.327071  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.826906  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.598762  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:11.612789  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:11.612903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:11.650029  301425 cri.go:89] found id: ""
	I0729 13:41:11.650063  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.650074  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:11.650084  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:11.650152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:11.687479  301425 cri.go:89] found id: ""
	I0729 13:41:11.687510  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.687520  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:11.687527  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:11.687593  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:11.723788  301425 cri.go:89] found id: ""
	I0729 13:41:11.723816  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.723824  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:11.723830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:11.723878  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:11.760304  301425 cri.go:89] found id: ""
	I0729 13:41:11.760341  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.760353  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:11.760361  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:11.760429  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:11.794175  301425 cri.go:89] found id: ""
	I0729 13:41:11.794202  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.794210  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:11.794216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:11.794276  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:11.830653  301425 cri.go:89] found id: ""
	I0729 13:41:11.830679  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.830689  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:11.830697  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:11.830755  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:11.869360  301425 cri.go:89] found id: ""
	I0729 13:41:11.869391  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.869403  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:11.869410  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:11.869473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:11.904164  301425 cri.go:89] found id: ""
	I0729 13:41:11.904195  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.904206  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:11.904218  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:11.904236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:11.979031  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.979054  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:11.979069  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:12.064215  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:12.064254  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:12.101854  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:12.101896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:12.152327  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:12.152362  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:14.668032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:14.683118  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:14.683182  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:14.722574  301425 cri.go:89] found id: ""
	I0729 13:41:14.722602  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.722612  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:14.722619  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:14.722686  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:14.759047  301425 cri.go:89] found id: ""
	I0729 13:41:14.759084  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.759094  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:14.759099  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:14.759156  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:14.794363  301425 cri.go:89] found id: ""
	I0729 13:41:14.794400  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.794411  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:14.794418  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:14.794488  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:14.831542  301425 cri.go:89] found id: ""
	I0729 13:41:14.831579  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.831586  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:14.831592  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:14.831650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:14.878710  301425 cri.go:89] found id: ""
	I0729 13:41:14.878745  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.878758  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:14.878765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:14.878824  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:14.937804  301425 cri.go:89] found id: ""
	I0729 13:41:14.937837  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.937847  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:14.937856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:14.937923  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:14.985616  301425 cri.go:89] found id: ""
	I0729 13:41:14.985649  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.985658  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:14.985665  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:14.985737  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:15.023210  301425 cri.go:89] found id: ""
	I0729 13:41:15.023248  301425 logs.go:276] 0 containers: []
	W0729 13:41:15.023261  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:15.023273  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:15.023288  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:15.072549  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:15.072587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:15.086624  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:15.086653  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:15.155391  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:15.155412  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:15.155426  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:15.237480  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:15.237535  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:13.666473  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.666831  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:13.324177  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.324419  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:14.326023  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:16.826314  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.779568  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:17.794163  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:17.794225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:17.831416  301425 cri.go:89] found id: ""
	I0729 13:41:17.831446  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.831456  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:17.831463  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:17.831519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:17.868713  301425 cri.go:89] found id: ""
	I0729 13:41:17.868740  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.868752  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:17.868758  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:17.868834  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:17.913159  301425 cri.go:89] found id: ""
	I0729 13:41:17.913200  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.913211  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:17.913221  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:17.913291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:17.947528  301425 cri.go:89] found id: ""
	I0729 13:41:17.947559  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.947567  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:17.947573  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:17.947693  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:17.982280  301425 cri.go:89] found id: ""
	I0729 13:41:17.982314  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.982323  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:17.982330  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:17.982407  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:18.023729  301425 cri.go:89] found id: ""
	I0729 13:41:18.023767  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.023776  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:18.023783  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:18.023847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:18.061594  301425 cri.go:89] found id: ""
	I0729 13:41:18.061629  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.061637  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:18.061642  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:18.061694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:18.095705  301425 cri.go:89] found id: ""
	I0729 13:41:18.095735  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.095745  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:18.095758  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:18.095778  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:18.175843  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:18.175879  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:18.222979  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:18.223015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:18.277265  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:18.277308  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:18.291002  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:18.291037  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:18.373425  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:20.873958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:20.888091  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:20.888153  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:20.925850  301425 cri.go:89] found id: ""
	I0729 13:41:20.925886  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.925894  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:20.925901  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:20.925955  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:20.962725  301425 cri.go:89] found id: ""
	I0729 13:41:20.962762  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.962774  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:20.962782  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:20.962847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:18.166668  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.166993  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.827065  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.325697  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:19.325369  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:21.326574  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.998741  301425 cri.go:89] found id: ""
	I0729 13:41:20.998778  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.998787  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:20.998794  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:20.998842  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:21.036370  301425 cri.go:89] found id: ""
	I0729 13:41:21.036401  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.036410  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:21.036417  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:21.036483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:21.071560  301425 cri.go:89] found id: ""
	I0729 13:41:21.071588  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.071597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:21.071605  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:21.071670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:21.106778  301425 cri.go:89] found id: ""
	I0729 13:41:21.106810  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.106822  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:21.106830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:21.106890  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:21.139901  301425 cri.go:89] found id: ""
	I0729 13:41:21.139926  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.139934  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:21.139940  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:21.140001  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:21.173281  301425 cri.go:89] found id: ""
	I0729 13:41:21.173312  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.173320  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:21.173330  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:21.173344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:21.225055  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:21.225095  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:21.239780  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:21.239864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:21.313460  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:21.313486  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:21.313504  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:21.398557  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:21.398599  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:23.937873  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:23.951595  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:23.951653  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:23.987177  301425 cri.go:89] found id: ""
	I0729 13:41:23.987208  301425 logs.go:276] 0 containers: []
	W0729 13:41:23.987217  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:23.987225  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:23.987324  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:24.030197  301425 cri.go:89] found id: ""
	I0729 13:41:24.030251  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.030264  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:24.030272  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:24.030339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:24.068031  301425 cri.go:89] found id: ""
	I0729 13:41:24.068061  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.068074  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:24.068081  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:24.068154  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:24.107192  301425 cri.go:89] found id: ""
	I0729 13:41:24.107221  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.107232  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:24.107239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:24.107304  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:24.143154  301425 cri.go:89] found id: ""
	I0729 13:41:24.143182  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.143190  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:24.143196  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:24.143248  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:24.181268  301425 cri.go:89] found id: ""
	I0729 13:41:24.181296  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.181304  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:24.181311  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:24.181370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:24.215248  301425 cri.go:89] found id: ""
	I0729 13:41:24.215284  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.215293  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:24.215299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:24.215363  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:24.250796  301425 cri.go:89] found id: ""
	I0729 13:41:24.250822  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.250831  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:24.250841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:24.250853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:24.305841  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:24.305883  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:24.320182  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:24.320214  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:24.389667  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:24.389690  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:24.389707  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:24.471435  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:24.471479  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:22.665718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.166432  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:22.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:24.826598  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:26.828504  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:23.825754  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.834253  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:28.329733  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:27.014508  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:27.029318  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:27.029382  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:27.064115  301425 cri.go:89] found id: ""
	I0729 13:41:27.064150  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.064161  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:27.064169  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:27.064250  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:27.099081  301425 cri.go:89] found id: ""
	I0729 13:41:27.099110  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.099123  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:27.099131  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:27.099197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:27.132475  301425 cri.go:89] found id: ""
	I0729 13:41:27.132506  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.132518  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:27.132527  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:27.132595  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:27.168924  301425 cri.go:89] found id: ""
	I0729 13:41:27.168948  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.168956  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:27.168962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:27.169015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:27.204052  301425 cri.go:89] found id: ""
	I0729 13:41:27.204082  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.204094  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:27.204109  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:27.204170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:27.238355  301425 cri.go:89] found id: ""
	I0729 13:41:27.238383  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.238391  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:27.238397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:27.238496  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:27.276104  301425 cri.go:89] found id: ""
	I0729 13:41:27.276139  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.276150  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:27.276157  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:27.276222  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:27.308612  301425 cri.go:89] found id: ""
	I0729 13:41:27.308643  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.308654  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:27.308667  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:27.308683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:27.362472  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:27.362511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:27.376349  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:27.376383  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:27.458450  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:27.458472  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:27.458486  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:27.536405  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:27.536445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:30.076285  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:30.091308  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:30.091386  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:30.138335  301425 cri.go:89] found id: ""
	I0729 13:41:30.138369  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.138381  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:30.138389  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:30.138454  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:30.176395  301425 cri.go:89] found id: ""
	I0729 13:41:30.176425  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.176435  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:30.176443  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:30.176495  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:30.214990  301425 cri.go:89] found id: ""
	I0729 13:41:30.215027  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.215035  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:30.215041  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:30.215090  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:30.252051  301425 cri.go:89] found id: ""
	I0729 13:41:30.252080  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.252088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:30.252094  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:30.252155  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:30.287210  301425 cri.go:89] found id: ""
	I0729 13:41:30.287240  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.287249  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:30.287254  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:30.287337  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:30.322813  301425 cri.go:89] found id: ""
	I0729 13:41:30.322842  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.322851  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:30.322857  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:30.322924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:30.358697  301425 cri.go:89] found id: ""
	I0729 13:41:30.358730  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.358738  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:30.358744  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:30.358804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:30.394252  301425 cri.go:89] found id: ""
	I0729 13:41:30.394283  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.394294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:30.394305  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:30.394321  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:30.446777  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:30.446820  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:30.461564  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:30.461605  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:30.537918  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:30.537942  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:30.537958  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:30.613821  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:30.613865  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:27.167654  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.666133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.323396  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:31.324718  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:30.825879  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:32.826458  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.154081  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:33.168252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:33.168353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:33.205675  301425 cri.go:89] found id: ""
	I0729 13:41:33.205708  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.205719  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:33.205727  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:33.205799  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:33.240556  301425 cri.go:89] found id: ""
	I0729 13:41:33.240582  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.240590  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:33.240596  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:33.240644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:33.276662  301425 cri.go:89] found id: ""
	I0729 13:41:33.276690  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.276698  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:33.276704  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:33.276773  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:33.318631  301425 cri.go:89] found id: ""
	I0729 13:41:33.318667  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.318677  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:33.318685  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:33.318762  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:33.354372  301425 cri.go:89] found id: ""
	I0729 13:41:33.354403  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.354412  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:33.354421  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:33.354475  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:33.389309  301425 cri.go:89] found id: ""
	I0729 13:41:33.389337  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.389346  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:33.389352  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:33.389404  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:33.423689  301425 cri.go:89] found id: ""
	I0729 13:41:33.423732  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.423745  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:33.423753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:33.423823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:33.457556  301425 cri.go:89] found id: ""
	I0729 13:41:33.457593  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.457605  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:33.457618  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:33.457634  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:33.534377  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:33.534416  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.579646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:33.579689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:33.629784  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:33.629819  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:33.643878  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:33.643912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:33.716446  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:32.167152  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:34.666054  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.667479  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.823726  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.824199  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.324827  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.325672  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.216598  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:36.229904  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:36.230003  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:36.263721  301425 cri.go:89] found id: ""
	I0729 13:41:36.263752  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.263771  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:36.263786  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:36.263838  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:36.297900  301425 cri.go:89] found id: ""
	I0729 13:41:36.297932  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.297950  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:36.297958  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:36.298023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:36.338037  301425 cri.go:89] found id: ""
	I0729 13:41:36.338064  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.338072  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:36.338078  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:36.338125  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:36.375334  301425 cri.go:89] found id: ""
	I0729 13:41:36.375362  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.375370  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:36.375375  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:36.375426  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:36.410760  301425 cri.go:89] found id: ""
	I0729 13:41:36.410794  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.410805  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:36.410813  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:36.410888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:36.445247  301425 cri.go:89] found id: ""
	I0729 13:41:36.445280  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.445291  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:36.445300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:36.445364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:36.487183  301425 cri.go:89] found id: ""
	I0729 13:41:36.487214  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.487221  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:36.487228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:36.487301  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:36.522407  301425 cri.go:89] found id: ""
	I0729 13:41:36.522433  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.522442  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:36.522453  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:36.522468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:36.537163  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:36.537197  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:36.608334  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.608361  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:36.608376  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:36.689026  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:36.689074  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:36.728580  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:36.728618  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.279605  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:39.293259  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:39.293320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:39.329070  301425 cri.go:89] found id: ""
	I0729 13:41:39.329095  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.329103  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:39.329109  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:39.329160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:39.362992  301425 cri.go:89] found id: ""
	I0729 13:41:39.363023  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.363032  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:39.363038  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:39.363100  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:39.403094  301425 cri.go:89] found id: ""
	I0729 13:41:39.403128  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.403140  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:39.403147  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:39.403201  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:39.435761  301425 cri.go:89] found id: ""
	I0729 13:41:39.435795  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.435806  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:39.435814  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:39.435881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:39.468299  301425 cri.go:89] found id: ""
	I0729 13:41:39.468332  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.468341  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:39.468349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:39.468417  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:39.505114  301425 cri.go:89] found id: ""
	I0729 13:41:39.505149  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.505162  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:39.505172  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:39.505234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:39.536942  301425 cri.go:89] found id: ""
	I0729 13:41:39.536975  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.536986  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:39.536994  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:39.537064  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:39.577394  301425 cri.go:89] found id: ""
	I0729 13:41:39.577427  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.577439  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:39.577451  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:39.577468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.631143  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:39.631184  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:39.645020  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:39.645047  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:39.718256  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:39.718283  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:39.718297  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:39.801990  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:39.802036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:39.166762  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.167646  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.824966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.825836  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.324009  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.327169  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.826091  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.347066  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:42.359902  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:42.359983  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:42.395494  301425 cri.go:89] found id: ""
	I0729 13:41:42.395529  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.395540  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:42.395548  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:42.395611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:42.429305  301425 cri.go:89] found id: ""
	I0729 13:41:42.429334  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.429343  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:42.429350  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:42.429401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:42.466902  301425 cri.go:89] found id: ""
	I0729 13:41:42.466931  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.466942  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:42.466949  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:42.467017  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:42.504582  301425 cri.go:89] found id: ""
	I0729 13:41:42.504618  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.504628  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:42.504652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:42.504717  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:42.539649  301425 cri.go:89] found id: ""
	I0729 13:41:42.539676  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.539686  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:42.539695  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:42.539758  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:42.579209  301425 cri.go:89] found id: ""
	I0729 13:41:42.579238  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.579249  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:42.579257  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:42.579320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:42.614832  301425 cri.go:89] found id: ""
	I0729 13:41:42.614861  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.614869  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:42.614874  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:42.614925  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:42.651837  301425 cri.go:89] found id: ""
	I0729 13:41:42.651865  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.651873  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:42.651883  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:42.651899  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:42.707149  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:42.707190  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:42.720990  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:42.721043  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:42.789818  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:42.789849  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:42.789867  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:42.871880  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:42.871934  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.416172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:45.428923  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:45.428994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:45.466667  301425 cri.go:89] found id: ""
	I0729 13:41:45.466699  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.466710  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:45.466717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:45.466783  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:45.501779  301425 cri.go:89] found id: ""
	I0729 13:41:45.501813  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.501825  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:45.501832  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:45.501896  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:45.537507  301425 cri.go:89] found id: ""
	I0729 13:41:45.537537  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.537547  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:45.537554  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:45.537619  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:45.575430  301425 cri.go:89] found id: ""
	I0729 13:41:45.575460  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.575467  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:45.575474  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:45.575523  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:45.613009  301425 cri.go:89] found id: ""
	I0729 13:41:45.613038  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.613047  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:45.613053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:45.613103  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:45.650734  301425 cri.go:89] found id: ""
	I0729 13:41:45.650767  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.650778  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:45.650786  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:45.650853  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:45.684301  301425 cri.go:89] found id: ""
	I0729 13:41:45.684332  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.684341  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:45.684349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:45.684416  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:45.719861  301425 cri.go:89] found id: ""
	I0729 13:41:45.719901  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.719911  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:45.719921  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:45.719936  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:45.800422  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:45.800464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.842460  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:45.842493  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:45.897388  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:45.897430  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:45.911554  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:45.911587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:41:43.665771  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.666196  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:44.325813  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:46.824774  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:43.828518  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.830106  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:48.325196  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	W0729 13:41:45.984435  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.485014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:48.498038  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:48.498110  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:48.534248  301425 cri.go:89] found id: ""
	I0729 13:41:48.534280  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.534291  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:48.534299  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:48.534362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:48.572411  301425 cri.go:89] found id: ""
	I0729 13:41:48.572445  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.572457  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:48.572465  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:48.572524  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:48.612345  301425 cri.go:89] found id: ""
	I0729 13:41:48.612373  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.612381  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:48.612387  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:48.612450  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:48.650334  301425 cri.go:89] found id: ""
	I0729 13:41:48.650385  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.650395  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:48.650401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:48.650466  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:48.687460  301425 cri.go:89] found id: ""
	I0729 13:41:48.687490  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.687501  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:48.687508  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:48.687572  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:48.735028  301425 cri.go:89] found id: ""
	I0729 13:41:48.735064  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.735077  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:48.735085  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:48.735142  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:48.771175  301425 cri.go:89] found id: ""
	I0729 13:41:48.771209  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.771220  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:48.771228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:48.771300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:48.808267  301425 cri.go:89] found id: ""
	I0729 13:41:48.808295  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.808304  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:48.808314  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:48.808328  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:48.850520  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:48.850557  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:48.902563  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:48.902612  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:48.919082  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:48.919114  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:48.999185  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.999213  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:48.999241  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:48.166020  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:49.323402  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.326596  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.825399  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:52.831823  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.579922  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:51.593149  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:51.593213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:51.626302  301425 cri.go:89] found id: ""
	I0729 13:41:51.626330  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.626338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:51.626344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:51.626393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:51.659551  301425 cri.go:89] found id: ""
	I0729 13:41:51.659578  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.659586  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:51.659592  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:51.659642  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:51.696842  301425 cri.go:89] found id: ""
	I0729 13:41:51.696868  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.696876  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:51.696882  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:51.696937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:51.737209  301425 cri.go:89] found id: ""
	I0729 13:41:51.737237  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.737246  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:51.737253  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:51.737317  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:51.772782  301425 cri.go:89] found id: ""
	I0729 13:41:51.772829  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.772842  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:51.772850  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:51.772921  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:51.806649  301425 cri.go:89] found id: ""
	I0729 13:41:51.806679  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.806690  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:51.806698  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:51.806771  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:51.848950  301425 cri.go:89] found id: ""
	I0729 13:41:51.848978  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.848989  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:51.848997  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:51.849065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:51.884875  301425 cri.go:89] found id: ""
	I0729 13:41:51.884902  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.884910  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:51.884920  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:51.884932  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.964282  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:51.964322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:52.004218  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:52.004251  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:52.056230  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:52.056266  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.069591  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:52.069622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:52.142552  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:54.643154  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:54.657199  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:54.657259  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:54.694124  301425 cri.go:89] found id: ""
	I0729 13:41:54.694152  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.694159  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:54.694165  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:54.694221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:54.732072  301425 cri.go:89] found id: ""
	I0729 13:41:54.732109  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.732119  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:54.732127  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:54.732194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:54.768257  301425 cri.go:89] found id: ""
	I0729 13:41:54.768294  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.768306  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:54.768314  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:54.768383  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:54.807596  301425 cri.go:89] found id: ""
	I0729 13:41:54.807631  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.807643  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:54.807651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:54.807716  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:54.845107  301425 cri.go:89] found id: ""
	I0729 13:41:54.845134  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.845142  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:54.845148  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:54.845197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:54.880627  301425 cri.go:89] found id: ""
	I0729 13:41:54.880655  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.880667  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:54.880675  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:54.880750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:54.918122  301425 cri.go:89] found id: ""
	I0729 13:41:54.918151  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.918159  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:54.918165  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:54.918219  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:54.956943  301425 cri.go:89] found id: ""
	I0729 13:41:54.956986  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.956999  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:54.957022  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:54.957036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:55.032512  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:55.032547  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:55.032564  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:55.116653  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:55.116699  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:55.177030  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:55.177059  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:55.238789  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:55.238831  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.166339  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:54.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:53.824694  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:56.324761  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:55.324698  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.326135  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.753504  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:57.766354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:57.766436  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:57.802691  301425 cri.go:89] found id: ""
	I0729 13:41:57.802728  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.802740  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:57.802746  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:57.802807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:57.839800  301425 cri.go:89] found id: ""
	I0729 13:41:57.839823  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.839830  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:57.839846  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:57.839902  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:57.881592  301425 cri.go:89] found id: ""
	I0729 13:41:57.881617  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.881625  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:57.881631  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:57.881681  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.916245  301425 cri.go:89] found id: ""
	I0729 13:41:57.916273  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.916282  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:57.916290  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:57.916346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:57.952224  301425 cri.go:89] found id: ""
	I0729 13:41:57.952261  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.952272  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:57.952280  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:57.952340  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:57.985508  301425 cri.go:89] found id: ""
	I0729 13:41:57.985537  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.985548  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:57.985557  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:57.985624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:58.022354  301425 cri.go:89] found id: ""
	I0729 13:41:58.022382  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.022391  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:58.022397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:58.022462  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:58.055865  301425 cri.go:89] found id: ""
	I0729 13:41:58.055891  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.055900  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:58.055914  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:58.055931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:58.069143  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:58.069177  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:58.143137  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:58.143164  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:58.143183  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:58.224631  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:58.224672  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:58.266437  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:58.266470  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:00.819300  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:00.834195  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:00.834258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:00.869660  301425 cri.go:89] found id: ""
	I0729 13:42:00.869697  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.869709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:00.869717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:00.869777  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:00.915601  301425 cri.go:89] found id: ""
	I0729 13:42:00.915630  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.915638  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:00.915644  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:00.915694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:00.956981  301425 cri.go:89] found id: ""
	I0729 13:42:00.957020  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.957028  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:00.957034  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:00.957094  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.166038  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.666455  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.666824  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:58.824729  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.825513  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.825074  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.826480  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.995761  301425 cri.go:89] found id: ""
	I0729 13:42:00.995793  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.995801  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:00.995817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:00.995869  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:01.047668  301425 cri.go:89] found id: ""
	I0729 13:42:01.047699  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.047707  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:01.047713  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:01.047787  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:01.085178  301425 cri.go:89] found id: ""
	I0729 13:42:01.085209  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.085217  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:01.085224  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:01.085278  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:01.125282  301425 cri.go:89] found id: ""
	I0729 13:42:01.125310  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.125320  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:01.125329  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:01.125396  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:01.165972  301425 cri.go:89] found id: ""
	I0729 13:42:01.166005  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.166021  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:01.166033  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:01.166049  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:01.236500  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:01.236523  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:01.236540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:01.320918  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:01.320959  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:01.366975  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:01.367015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:01.420347  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:01.420389  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:03.936048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:03.949603  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:03.949679  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:03.987529  301425 cri.go:89] found id: ""
	I0729 13:42:03.987557  301425 logs.go:276] 0 containers: []
	W0729 13:42:03.987567  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:03.987574  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:03.987639  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:04.027325  301425 cri.go:89] found id: ""
	I0729 13:42:04.027355  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.027365  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:04.027372  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:04.027437  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:04.063019  301425 cri.go:89] found id: ""
	I0729 13:42:04.063050  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.063059  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:04.063065  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:04.063117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:04.101106  301425 cri.go:89] found id: ""
	I0729 13:42:04.101135  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.101146  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:04.101153  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:04.101242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:04.137186  301425 cri.go:89] found id: ""
	I0729 13:42:04.137219  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.137230  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:04.137238  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:04.137302  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:04.175732  301425 cri.go:89] found id: ""
	I0729 13:42:04.175761  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.175770  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:04.175776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:04.175826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:04.213265  301425 cri.go:89] found id: ""
	I0729 13:42:04.213296  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.213307  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:04.213315  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:04.213381  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:04.248581  301425 cri.go:89] found id: ""
	I0729 13:42:04.248609  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.248617  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:04.248627  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:04.248643  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:04.303277  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:04.303400  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:04.317518  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:04.317547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:04.385209  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:04.385229  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:04.385242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:04.470629  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:04.470680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:04.167299  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.168006  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.324087  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:05.324904  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.826588  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.325326  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:08.326125  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.012455  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:07.028535  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:07.028621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:07.063453  301425 cri.go:89] found id: ""
	I0729 13:42:07.063496  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.063505  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:07.063511  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:07.063582  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:07.098243  301425 cri.go:89] found id: ""
	I0729 13:42:07.098274  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.098284  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:07.098291  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:07.098357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:07.138122  301425 cri.go:89] found id: ""
	I0729 13:42:07.138149  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.138157  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:07.138162  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:07.138213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:07.176772  301425 cri.go:89] found id: ""
	I0729 13:42:07.176814  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.176826  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:07.176835  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:07.176894  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:07.214867  301425 cri.go:89] found id: ""
	I0729 13:42:07.214898  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.214914  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:07.214920  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:07.214979  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:07.253443  301425 cri.go:89] found id: ""
	I0729 13:42:07.253471  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.253481  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:07.253490  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:07.253550  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:07.287284  301425 cri.go:89] found id: ""
	I0729 13:42:07.287326  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.287338  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:07.287349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:07.287411  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:07.330550  301425 cri.go:89] found id: ""
	I0729 13:42:07.330577  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.330588  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:07.330599  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:07.330620  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:07.384226  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:07.384268  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:07.398790  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:07.398817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:07.462868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:07.462893  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:07.462914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:07.538665  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:07.538706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.078452  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:10.091962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:10.092027  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:10.127401  301425 cri.go:89] found id: ""
	I0729 13:42:10.127434  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.127445  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:10.127454  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:10.127531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:10.161088  301425 cri.go:89] found id: ""
	I0729 13:42:10.161117  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.161127  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:10.161134  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:10.161187  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:10.199721  301425 cri.go:89] found id: ""
	I0729 13:42:10.199751  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.199763  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:10.199769  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:10.199821  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:10.237067  301425 cri.go:89] found id: ""
	I0729 13:42:10.237106  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.237120  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:10.237127  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:10.237191  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:10.275863  301425 cri.go:89] found id: ""
	I0729 13:42:10.275894  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.275909  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:10.275918  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:10.275981  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:10.313234  301425 cri.go:89] found id: ""
	I0729 13:42:10.313262  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.313270  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:10.313276  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:10.313334  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:10.353530  301425 cri.go:89] found id: ""
	I0729 13:42:10.353558  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.353569  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:10.353576  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:10.353644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:10.389488  301425 cri.go:89] found id: ""
	I0729 13:42:10.389516  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.389527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:10.389539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:10.389562  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.428705  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:10.428740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:10.484413  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:10.484456  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:10.499203  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:10.499248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:10.570868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:10.570894  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:10.570907  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:08.667158  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:11.166721  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.825638  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.324753  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.326752  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:12.826001  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:13.151788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:13.165297  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:13.165367  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:13.203752  301425 cri.go:89] found id: ""
	I0729 13:42:13.203786  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.203798  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:13.203805  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:13.203874  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:13.240454  301425 cri.go:89] found id: ""
	I0729 13:42:13.240491  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.240499  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:13.240504  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:13.240556  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:13.276508  301425 cri.go:89] found id: ""
	I0729 13:42:13.276536  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.276545  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:13.276553  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:13.276617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:13.311252  301425 cri.go:89] found id: ""
	I0729 13:42:13.311280  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.311291  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:13.311299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:13.311353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:13.351777  301425 cri.go:89] found id: ""
	I0729 13:42:13.351808  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.351817  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:13.351823  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:13.351881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:13.389020  301425 cri.go:89] found id: ""
	I0729 13:42:13.389049  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.389058  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:13.389064  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:13.389126  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:13.424353  301425 cri.go:89] found id: ""
	I0729 13:42:13.424387  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.424395  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:13.424401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:13.424451  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:13.460755  301425 cri.go:89] found id: ""
	I0729 13:42:13.460788  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.460817  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:13.460830  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:13.460850  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:13.500201  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:13.500234  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:13.553319  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:13.553357  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:13.567496  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:13.567529  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:13.644662  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:13.644686  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:13.644700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:13.667287  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.160289  301044 pod_ready.go:81] duration metric: took 4m0.000442608s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:16.160321  301044 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 13:42:16.160342  301044 pod_ready.go:38] duration metric: took 4m7.984743222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:16.160378  301044 kubeadm.go:597] duration metric: took 4m16.091281244s to restartPrimaryControlPlane
	W0729 13:42:16.160459  301044 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:16.160486  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:12.825387  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.826853  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.827679  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.829149  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326337  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326370  300746 pod_ready.go:81] duration metric: took 4m0.007721109s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:17.326383  300746 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:42:17.326392  300746 pod_ready.go:38] duration metric: took 4m8.417741792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:17.326410  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:42:17.326446  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:17.326514  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:17.373993  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.374027  300746 cri.go:89] found id: ""
	I0729 13:42:17.374037  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:17.374118  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.384841  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:17.384929  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:17.422219  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.422253  300746 cri.go:89] found id: ""
	I0729 13:42:17.422263  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:17.422349  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.427319  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:17.427385  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:17.469310  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:17.469336  300746 cri.go:89] found id: ""
	I0729 13:42:17.469347  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:17.469412  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.474501  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:17.474590  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:17.520767  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:17.520808  300746 cri.go:89] found id: ""
	I0729 13:42:17.520818  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:17.520881  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.525543  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:17.525643  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:17.572718  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.572749  300746 cri.go:89] found id: ""
	I0729 13:42:17.572758  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:17.572839  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.577227  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:17.577304  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:17.614076  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.614098  300746 cri.go:89] found id: ""
	I0729 13:42:17.614106  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:17.614153  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.618404  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:17.618479  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:17.666242  300746 cri.go:89] found id: ""
	I0729 13:42:17.666275  300746 logs.go:276] 0 containers: []
	W0729 13:42:17.666285  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:17.666301  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:17.666373  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:17.713379  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:17.713411  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:17.713418  300746 cri.go:89] found id: ""
	I0729 13:42:17.713428  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:17.713493  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.719026  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.723948  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:17.723974  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:17.743561  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:17.743607  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.803393  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:17.803425  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.855689  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:17.855723  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.898327  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:17.898361  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.951024  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:17.951060  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:18.014040  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:18.014082  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:18.159937  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:18.159984  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:18.201626  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:18.201667  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:18.247168  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:18.247211  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:18.291431  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:18.291469  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:18.333636  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:18.333671  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
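The block above records the diagnostic commands this run shells out to: journalctl for the kubelet and CRI-O units, dmesg for kernel-level warnings, and crictl logs --tail 400 for each discovered container. A minimal stand-alone sketch of that pattern, run locally rather than over minikube's SSH runner and with a hypothetical container ID, might look like:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command with sudo and prints its combined output.
// This mirrors the commands logged above, but executes them locally instead of
// through an SSH session to the node.
func gather(label string, args ...string) {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
}

func main() {
	// Unit logs for the node services.
	gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	gather("CRI-O", "journalctl", "-u", "crio", "-n", "400")

	// Kernel warnings and errors.
	gather("dmesg", "dmesg", "--level", "warn,err,crit,alert,emerg")

	// Per-container logs; the ID would come from `crictl ps`, as in the next step.
	containerID := "f08ba8d78f50" // hypothetical ID, for illustration only
	gather("kube-apiserver", "crictl", "logs", "--tail", "400", containerID)
}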
	I0729 13:42:16.226602  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:16.242934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:16.243005  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:16.284033  301425 cri.go:89] found id: ""
	I0729 13:42:16.284064  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.284075  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:16.284083  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:16.284152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:16.328362  301425 cri.go:89] found id: ""
	I0729 13:42:16.328388  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.328396  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:16.328402  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:16.328464  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:16.372664  301425 cri.go:89] found id: ""
	I0729 13:42:16.372701  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.372712  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:16.372727  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:16.372818  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:16.416085  301425 cri.go:89] found id: ""
	I0729 13:42:16.416119  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.416130  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:16.416138  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:16.416194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:16.457786  301425 cri.go:89] found id: ""
	I0729 13:42:16.457819  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.457830  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:16.457838  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:16.457903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:16.498929  301425 cri.go:89] found id: ""
	I0729 13:42:16.498962  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.498971  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:16.498979  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:16.499043  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:16.546159  301425 cri.go:89] found id: ""
	I0729 13:42:16.546187  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.546199  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:16.546207  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:16.546270  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:16.585010  301425 cri.go:89] found id: ""
	I0729 13:42:16.585041  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.585052  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:16.585065  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:16.585081  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:16.639033  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:16.639079  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:16.656209  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:16.656242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:16.734835  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:16.734863  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:16.734940  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.818756  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:16.818798  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
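Each cycle above enumerates containers one component at a time with crictl ps -a --quiet --name=<component>; an empty result is what produces the repeated "No container was found matching ..." warnings. A rough local equivalent of that check, assuming only that crictl is on the PATH, is sketched below:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches the
// given component, returning one CRI container ID per line of crictl output.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}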
	I0729 13:42:19.370796  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:19.384267  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:19.384354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:19.425595  301425 cri.go:89] found id: ""
	I0729 13:42:19.425629  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.425641  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:19.425650  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:19.425715  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:19.461470  301425 cri.go:89] found id: ""
	I0729 13:42:19.461506  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.461517  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:19.461524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:19.461592  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:19.508232  301425 cri.go:89] found id: ""
	I0729 13:42:19.508265  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.508275  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:19.508283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:19.508360  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:19.546226  301425 cri.go:89] found id: ""
	I0729 13:42:19.546259  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.546275  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:19.546283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:19.546354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:19.581125  301425 cri.go:89] found id: ""
	I0729 13:42:19.581156  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.581167  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:19.581176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:19.581242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:19.619680  301425 cri.go:89] found id: ""
	I0729 13:42:19.619719  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.619728  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:19.619736  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:19.619800  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:19.657096  301425 cri.go:89] found id: ""
	I0729 13:42:19.657126  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.657136  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:19.657142  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:19.657203  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:19.697247  301425 cri.go:89] found id: ""
	I0729 13:42:19.697277  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.697286  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:19.697297  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:19.697312  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:19.714900  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:19.714935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:19.794118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:19.794145  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:19.794161  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:19.907077  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:19.907122  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.949841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:19.949871  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:19.324474  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:21.826117  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:18.858720  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:18.858773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:21.419344  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:21.440121  300746 api_server.go:72] duration metric: took 4m17.790553991s to wait for apiserver process to appear ...
	I0729 13:42:21.440149  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:42:21.440190  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:21.440242  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:21.485874  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:21.485897  300746 cri.go:89] found id: ""
	I0729 13:42:21.485905  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:21.485956  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.490424  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:21.490493  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:21.532174  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:21.532202  300746 cri.go:89] found id: ""
	I0729 13:42:21.532211  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:21.532259  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.536561  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:21.536622  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:21.579375  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:21.579397  300746 cri.go:89] found id: ""
	I0729 13:42:21.579404  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:21.579450  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.584710  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:21.584779  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:21.621437  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.621465  300746 cri.go:89] found id: ""
	I0729 13:42:21.621475  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:21.621536  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.625829  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:21.625898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:21.666063  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:21.666086  300746 cri.go:89] found id: ""
	I0729 13:42:21.666095  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:21.666162  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.670822  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:21.670898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:21.713993  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:21.714022  300746 cri.go:89] found id: ""
	I0729 13:42:21.714032  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:21.714099  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.718967  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:21.719044  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:21.761282  300746 cri.go:89] found id: ""
	I0729 13:42:21.761312  300746 logs.go:276] 0 containers: []
	W0729 13:42:21.761320  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:21.761327  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:21.761390  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:21.810085  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:21.810114  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:21.810121  300746 cri.go:89] found id: ""
	I0729 13:42:21.810130  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:21.810185  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.814713  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.819968  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:21.819996  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:21.834798  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:21.834823  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:21.957963  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:21.958000  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.995345  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:21.995376  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:22.037737  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:22.037773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:22.074774  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:22.074813  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:22.123172  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.123205  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.181432  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:22.181473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:22.237128  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:22.237162  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:22.285733  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:22.285766  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:22.328258  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:22.328291  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:22.381239  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.381276  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:22.840466  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:22.840504  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:22.515296  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:22.529187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:22.529286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:22.573033  301425 cri.go:89] found id: ""
	I0729 13:42:22.573070  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.573082  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:22.573091  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:22.573152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:22.608443  301425 cri.go:89] found id: ""
	I0729 13:42:22.608476  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.608489  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:22.608496  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:22.608566  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:22.641672  301425 cri.go:89] found id: ""
	I0729 13:42:22.641704  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.641716  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:22.641724  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:22.641781  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:22.673902  301425 cri.go:89] found id: ""
	I0729 13:42:22.673934  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.673944  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:22.673952  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:22.674012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:22.715131  301425 cri.go:89] found id: ""
	I0729 13:42:22.715165  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.715179  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:22.715187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:22.715251  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:22.748807  301425 cri.go:89] found id: ""
	I0729 13:42:22.748838  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.748848  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:22.748856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:22.748924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:22.781972  301425 cri.go:89] found id: ""
	I0729 13:42:22.782002  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.782012  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:22.782021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:22.782088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:22.815791  301425 cri.go:89] found id: ""
	I0729 13:42:22.815823  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.815834  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:22.815848  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.815864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.873595  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:22.873631  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:22.888081  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:22.888123  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:22.959873  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:22.959899  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.959912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:23.040996  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:23.041035  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:25.585159  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:25.604154  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.604240  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.645428  301425 cri.go:89] found id: ""
	I0729 13:42:25.645459  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.645466  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:25.645474  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.645534  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.682758  301425 cri.go:89] found id: ""
	I0729 13:42:25.682785  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.682793  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:25.682799  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.682864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.724297  301425 cri.go:89] found id: ""
	I0729 13:42:25.724330  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.724341  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:25.724349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.724401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.761124  301425 cri.go:89] found id: ""
	I0729 13:42:25.761157  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.761168  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:25.761177  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.761229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.802698  301425 cri.go:89] found id: ""
	I0729 13:42:25.802728  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.802741  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:25.802750  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.802804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.840472  301425 cri.go:89] found id: ""
	I0729 13:42:25.840499  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.840509  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:25.840516  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.840586  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.875217  301425 cri.go:89] found id: ""
	I0729 13:42:25.875255  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.875267  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.875273  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:25.875345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:25.919895  301425 cri.go:89] found id: ""
	I0729 13:42:25.919937  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.919948  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:25.919963  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.919988  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:24.324138  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:26.324843  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:25.399606  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:42:25.405339  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:42:25.406585  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:42:25.406607  300746 api_server.go:131] duration metric: took 3.966451518s to wait for apiserver health ...
	I0729 13:42:25.406615  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:42:25.406640  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.406686  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.442039  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:25.442068  300746 cri.go:89] found id: ""
	I0729 13:42:25.442079  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:25.442140  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.446769  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.446830  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.482122  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:25.482144  300746 cri.go:89] found id: ""
	I0729 13:42:25.482156  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:25.482211  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.486666  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.486729  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.534553  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:25.534584  300746 cri.go:89] found id: ""
	I0729 13:42:25.534595  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:25.534657  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.539546  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.539624  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.577538  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.577562  300746 cri.go:89] found id: ""
	I0729 13:42:25.577572  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:25.577635  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.582377  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.582457  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.628918  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:25.628945  300746 cri.go:89] found id: ""
	I0729 13:42:25.628955  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:25.629027  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.633502  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.633592  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.673133  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.673156  300746 cri.go:89] found id: ""
	I0729 13:42:25.673163  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:25.673210  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.677905  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.677994  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.724757  300746 cri.go:89] found id: ""
	I0729 13:42:25.724780  300746 logs.go:276] 0 containers: []
	W0729 13:42:25.724805  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.724813  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:25.724887  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:25.775101  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.775130  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:25.775136  300746 cri.go:89] found id: ""
	I0729 13:42:25.775144  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:25.775219  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.782008  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.787032  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:25.787064  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.834985  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:25.835026  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.897295  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:25.897338  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.938020  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.938053  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:26.002775  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:26.002808  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:26.021431  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:26.021473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:26.071861  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:26.071898  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:26.130018  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:26.130057  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:26.170233  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:26.170290  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:26.207687  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.207718  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.600518  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:26.600575  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:26.707024  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:26.707074  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:26.753205  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.753240  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:29.302597  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:42:29.302626  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.302630  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.302634  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.302638  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.302641  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.302644  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.302649  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.302654  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.302661  300746 system_pods.go:74] duration metric: took 3.896040202s to wait for pod list to return data ...
	I0729 13:42:29.302670  300746 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:42:29.305640  300746 default_sa.go:45] found service account: "default"
	I0729 13:42:29.305668  300746 default_sa.go:55] duration metric: took 2.989028ms for default service account to be created ...
	I0729 13:42:29.305679  300746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:42:29.310472  300746 system_pods.go:86] 8 kube-system pods found
	I0729 13:42:29.310495  300746 system_pods.go:89] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.310500  300746 system_pods.go:89] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.310505  300746 system_pods.go:89] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.310509  300746 system_pods.go:89] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.310513  300746 system_pods.go:89] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.310517  300746 system_pods.go:89] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.310523  300746 system_pods.go:89] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.310528  300746 system_pods.go:89] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.310536  300746 system_pods.go:126] duration metric: took 4.851477ms to wait for k8s-apps to be running ...
	I0729 13:42:29.310545  300746 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:42:29.310580  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.329123  300746 system_svc.go:56] duration metric: took 18.569258ms WaitForService to wait for kubelet
	I0729 13:42:29.329155  300746 kubeadm.go:582] duration metric: took 4m25.679589837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:42:29.329182  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:42:29.332696  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:42:29.332726  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:42:29.332741  300746 node_conditions.go:105] duration metric: took 3.551684ms to run NodePressure ...
	I0729 13:42:29.332756  300746 start.go:241] waiting for startup goroutines ...
	I0729 13:42:29.332770  300746 start.go:246] waiting for cluster config update ...
	I0729 13:42:29.332784  300746 start.go:255] writing updated cluster config ...
	I0729 13:42:29.333168  300746 ssh_runner.go:195] Run: rm -f paused
	I0729 13:42:29.394738  300746 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:42:29.396826  300746 out.go:177] * Done! kubectl is now configured to use "no-preload-566777" cluster and "default" namespace by default
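Just before declaring the cluster ready, this process confirmed the control plane by requesting https://192.168.61.84:8443/healthz and receiving a 200 "ok". A simplified version of that readiness probe is shown below; TLS verification is skipped purely to keep the sketch short (a real client would trust the cluster CA), and the URL and timeout are taken from the log and an assumption respectively.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 "ok"
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.84:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}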
	I0729 13:42:25.981964  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:25.982005  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:25.997546  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:25.997576  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:26.075879  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:26.075901  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.075917  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.158552  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.158593  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:28.704328  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:28.718946  301425 kubeadm.go:597] duration metric: took 4m3.546660825s to restartPrimaryControlPlane
	W0729 13:42:28.719041  301425 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:28.719086  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:29.251866  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.267009  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:29.277498  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:29.287980  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:29.288003  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:29.288054  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:42:29.297830  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:29.297890  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:29.308263  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:42:29.318332  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:29.318388  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:29.328684  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.339841  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:29.339894  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.351304  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:42:29.363901  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:29.363960  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
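The sequence just above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes the run greps for the expected control-plane endpoint and removes the file when the endpoint is missing (here the files do not exist at all, so each grep exits with status 2 and the rm is a no-op). The same check written as a small stand-alone Go sketch, with the endpoint and file list taken from the log and no claim to match minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig that does not reference the expected
// control-plane endpoint. A missing file is treated the same as a stale one,
// matching the "will remove" behaviour in the log above.
func removeIfStale(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		fmt.Printf("%s already points at %s, keeping it\n", path, endpoint)
		return
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		fmt.Printf("could not remove %s: %v\n", path, err)
		return
	}
	fmt.Printf("removed (or absent) stale config %s\n", path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		removeIfStale("/etc/kubernetes/"+f, endpoint)
	}
}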
	I0729 13:42:29.377255  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:29.453113  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:42:29.453212  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:29.609835  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:29.609970  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:29.610106  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:29.812529  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:29.814455  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:29.814551  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:29.814633  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:29.814727  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:29.814799  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:29.814915  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:29.814979  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:29.815695  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:29.816098  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:29.816602  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:29.817114  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:29.817184  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:29.817266  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:30.122967  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:30.287162  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:30.336346  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:30.516317  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:30.532829  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:30.533732  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:30.533809  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:30.672345  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:30.674334  301425 out.go:204]   - Booting up control plane ...
	I0729 13:42:30.674492  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:30.681661  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:30.681784  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:30.683350  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:30.687290  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:42:28.327998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:30.823998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:32.824105  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:34.825475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:37.324435  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:39.824490  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:42.323305  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:44.329376  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:46.823645  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
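The recurring pod_ready lines from process 300705 come from polling the metrics-server pod's Ready condition every couple of seconds until it flips to True or the wait times out. A rough equivalent using kubectl's jsonpath output is sketched below; the namespace and pod name are copied from the log, while the polling interval and retry count are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the named pod's Ready condition is currently "True",
// by asking kubectl for just that condition via a jsonpath expression.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	const ns, pod = "kube-system", "metrics-server-569cc877fc-nzn76"
	for i := 0; i < 10; i++ {
		ready, err := podReady(ns, pod)
		if err != nil {
			fmt.Printf("check failed: %v\n", err)
		} else {
			fmt.Printf("pod %q Ready=%v\n", pod, ready)
			if ready {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}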
	I0729 13:42:47.980926  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.820407091s)
	I0729 13:42:47.981010  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:47.997344  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:48.007813  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:48.017519  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:48.017538  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:48.017579  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:42:48.028739  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:48.028819  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:48.038417  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:42:48.047864  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:48.047921  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:48.057408  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.066977  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:48.067040  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.077017  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:42:48.087204  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:48.087267  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:48.097659  301044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:48.149712  301044 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:42:48.149883  301044 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:48.277280  301044 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:48.277441  301044 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:48.277578  301044 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:48.505523  301044 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:48.507718  301044 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:48.507827  301044 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:48.507941  301044 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:48.508049  301044 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:48.508139  301044 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:48.508245  301044 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:48.508334  301044 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:48.508431  301044 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:48.508518  301044 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:48.508622  301044 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:48.508740  301044 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:48.508824  301044 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:48.508949  301044 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:48.545220  301044 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:48.620528  301044 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:42:48.781015  301044 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:49.039301  301044 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:49.104540  301044 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:49.105022  301044 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:49.107524  301044 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:49.109579  301044 out.go:204]   - Booting up control plane ...
	I0729 13:42:49.109698  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:49.109836  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:49.109924  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:49.129789  301044 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:49.130766  301044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:49.130844  301044 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:49.272901  301044 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:42:49.273017  301044 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:42:50.274804  301044 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001903151s
	I0729 13:42:50.274906  301044 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:42:48.825621  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:51.324025  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.276427  301044 kubeadm.go:310] [api-check] The API server is healthy after 5.001280529s
	I0729 13:42:55.289666  301044 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:42:55.309747  301044 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:42:55.343304  301044 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:42:55.343537  301044 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-972693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:42:55.366319  301044 kubeadm.go:310] [bootstrap-token] Using token: bvsox4.ktqddck1jfi3aduz
	I0729 13:42:55.367592  301044 out.go:204]   - Configuring RBAC rules ...
	I0729 13:42:55.367695  301044 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:42:55.380118  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:42:55.393704  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:42:55.397859  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:42:55.401567  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:42:55.407851  301044 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:42:55.684714  301044 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:42:56.128597  301044 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:42:56.683879  301044 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:42:56.685050  301044 kubeadm.go:310] 
	I0729 13:42:56.685127  301044 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:42:56.685137  301044 kubeadm.go:310] 
	I0729 13:42:56.685216  301044 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:42:56.685226  301044 kubeadm.go:310] 
	I0729 13:42:56.685252  301044 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:42:56.685335  301044 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:42:56.685414  301044 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:42:56.685422  301044 kubeadm.go:310] 
	I0729 13:42:56.685527  301044 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:42:56.685550  301044 kubeadm.go:310] 
	I0729 13:42:56.685607  301044 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:42:56.685617  301044 kubeadm.go:310] 
	I0729 13:42:56.685684  301044 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:42:56.685800  301044 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:42:56.685916  301044 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:42:56.685933  301044 kubeadm.go:310] 
	I0729 13:42:56.686048  301044 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:42:56.686149  301044 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:42:56.686162  301044 kubeadm.go:310] 
	I0729 13:42:56.686277  301044 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686416  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 13:42:56.686449  301044 kubeadm.go:310] 	--control-plane 
	I0729 13:42:56.686462  301044 kubeadm.go:310] 
	I0729 13:42:56.686562  301044 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:42:56.686571  301044 kubeadm.go:310] 
	I0729 13:42:56.686687  301044 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686839  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 13:42:56.687046  301044 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:42:56.687123  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:42:56.687140  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:42:56.689013  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:42:53.324453  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.326475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:56.690282  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:42:56.703026  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
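	[editor's note] The bridge CNI step above only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file's contents are not in this log. A minimal Go sketch of writing an illustrative bridge-plugin conflist follows — the JSON values are generic assumptions, not minikube's actual file.

	package main

	import (
		"fmt"
		"os"
	)

	// Illustrative bridge CNI config; the real 1-k8s.conflist written by minikube
	// is not shown in this log, so every field value here is an assumption.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		// Create the CNI config directory and drop the conflist into it.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
	}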
	I0729 13:42:56.722677  301044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-972693 minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=default-k8s-diff-port-972693 minikube.k8s.io/primary=true
	I0729 13:42:56.738921  301044 ops.go:34] apiserver oom_adj: -16
	I0729 13:42:56.902369  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.402842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.902902  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.403358  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.903112  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.402540  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.902605  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.402440  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.903011  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:01.403295  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.823966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:00.323772  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:01.818493  300705 pod_ready.go:81] duration metric: took 4m0.000972043s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:43:01.818528  300705 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:43:01.818537  300705 pod_ready.go:38] duration metric: took 4m4.037818748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
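	[editor's note] The pod_ready wait above gave up after its 4m budget for metrics-server-569cc877fc-nzn76. A rough client-go sketch of the same kind of readiness poll follows (not minikube's actual pod_ready implementation); the pod name and the /var/lib/minikube/kubeconfig path are taken from this log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports the Ready condition or the timeout elapses.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-569cc877fc-nzn76", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}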
	I0729 13:43:01.818555  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:01.818589  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:01.818643  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:01.874334  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:01.874359  300705 cri.go:89] found id: ""
	I0729 13:43:01.874369  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:01.874439  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.879122  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:01.879214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:01.919779  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:01.919804  300705 cri.go:89] found id: ""
	I0729 13:43:01.919814  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:01.919874  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.924895  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:01.924963  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:01.970365  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:01.970386  300705 cri.go:89] found id: ""
	I0729 13:43:01.970394  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:01.970444  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.975331  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:01.975409  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:02.013029  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.013062  300705 cri.go:89] found id: ""
	I0729 13:43:02.013074  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:02.013136  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.017958  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:02.018019  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:02.062357  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.062385  300705 cri.go:89] found id: ""
	I0729 13:43:02.062394  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:02.062463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.066791  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:02.066841  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:02.103790  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:02.103812  300705 cri.go:89] found id: ""
	I0729 13:43:02.103821  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:02.103882  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.108242  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:02.108293  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:02.151089  300705 cri.go:89] found id: ""
	I0729 13:43:02.151122  300705 logs.go:276] 0 containers: []
	W0729 13:43:02.151133  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:02.151141  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:02.151204  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:02.205700  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:02.205727  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.205732  300705 cri.go:89] found id: ""
	I0729 13:43:02.205741  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:02.205790  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.210332  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.214889  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:02.214913  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:02.229589  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:02.229621  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:02.278361  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:02.278394  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:02.319117  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:02.319146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.357874  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:02.357908  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.402114  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:02.402146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.442480  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:02.442514  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:01.903256  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.403400  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.902925  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.402616  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.903161  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.403255  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.902489  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.402506  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.902530  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:06.402436  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.953914  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:02.953961  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:03.013404  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:03.013441  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:03.151261  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:03.151294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:03.199910  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:03.199964  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:03.257103  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:03.257137  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:03.308519  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:03.308559  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:05.857929  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:05.878306  300705 api_server.go:72] duration metric: took 4m15.820258046s to wait for apiserver process to appear ...
	I0729 13:43:05.878338  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:05.878383  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:05.878451  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:05.924031  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:05.924071  300705 cri.go:89] found id: ""
	I0729 13:43:05.924083  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:05.924151  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.929284  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:05.929363  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:05.968980  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:05.969003  300705 cri.go:89] found id: ""
	I0729 13:43:05.969010  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:05.969056  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.973451  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:05.973516  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:06.011760  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.011784  300705 cri.go:89] found id: ""
	I0729 13:43:06.011794  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:06.011857  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.016065  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:06.016132  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:06.066319  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.066345  300705 cri.go:89] found id: ""
	I0729 13:43:06.066353  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:06.066420  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.071060  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:06.071120  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:06.117383  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.117405  300705 cri.go:89] found id: ""
	I0729 13:43:06.117413  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:06.117463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.121968  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:06.122053  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:06.156125  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.156151  300705 cri.go:89] found id: ""
	I0729 13:43:06.156160  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:06.156209  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.160301  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:06.160366  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:06.206751  300705 cri.go:89] found id: ""
	I0729 13:43:06.206780  300705 logs.go:276] 0 containers: []
	W0729 13:43:06.206790  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:06.206798  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:06.206860  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:06.248884  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.248918  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:06.248925  300705 cri.go:89] found id: ""
	I0729 13:43:06.248936  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:06.249006  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.253087  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.257229  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:06.257252  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.291495  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:06.291528  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.330190  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:06.330219  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.366500  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:06.366536  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.424871  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:06.424906  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:06.855025  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:06.855069  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:06.870025  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:06.870055  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:06.986590  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:06.986630  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:07.036972  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:07.037007  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:07.092602  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:07.092646  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:07.135326  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:07.135366  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:07.190208  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:07.190247  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:07.241865  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:07.241896  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.902842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.402861  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.903148  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.402619  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.902869  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.403349  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.903277  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.402468  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.535843  301044 kubeadm.go:1113] duration metric: took 13.813154738s to wait for elevateKubeSystemPrivileges
	I0729 13:43:10.535879  301044 kubeadm.go:394] duration metric: took 5m10.527995876s to StartCluster
	I0729 13:43:10.535899  301044 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.535991  301044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:43:10.538845  301044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.539141  301044 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:43:10.539343  301044 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:43:10.539513  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:43:10.539528  301044 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539556  301044 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539574  301044 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-972693"
	I0729 13:43:10.539587  301044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-972693"
	I0729 13:43:10.539600  301044 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539623  301044 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.539635  301044 addons.go:243] addon metrics-server should already be in state true
	I0729 13:43:10.539692  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	W0729 13:43:10.539594  301044 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:43:10.539817  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.540342  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540368  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540380  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540399  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540664  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540814  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.542249  301044 out.go:177] * Verifying Kubernetes components...
	I0729 13:43:10.543974  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:43:10.561555  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0729 13:43:10.561585  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0729 13:43:10.561820  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 13:43:10.562096  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562160  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562579  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562694  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562711  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.562750  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562766  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563224  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563236  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563496  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.563516  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563793  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563923  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.563959  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563982  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.564526  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.564781  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.569041  301044 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.569062  301044 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:43:10.569091  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.569443  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.569462  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.580340  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0729 13:43:10.580852  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.581371  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.581384  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.581724  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.581911  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.583937  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0729 13:43:10.584108  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.584422  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.584864  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.584881  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.585262  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.585445  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.586285  301044 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:43:10.586973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.587855  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:43:10.587873  301044 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:43:10.587907  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.588885  301044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:43:10.689091  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:43:10.689558  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:10.689837  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
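	[editor's note] The kubelet-check above repeatedly calls http://localhost:10248/healthz until the kubelet answers; here it is still refusing connections. A minimal Go probe of that same endpoint, for illustration only (kubeadm's own check adds retries and backoff):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the kubelet healthz endpoint reported in the log above.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy yet:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz status:", resp.StatusCode)
	}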
	I0729 13:43:10.590240  301044 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.590258  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:43:10.590275  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.592026  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0729 13:43:10.592306  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.592778  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.592859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.592877  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.593162  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.593295  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.593382  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.593455  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.593663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594055  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.594082  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594233  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.594388  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.594485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.594621  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.594882  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.594892  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.595227  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.595663  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.595680  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.611094  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0729 13:43:10.611617  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.612200  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.612224  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.612600  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.612973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.614541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.614743  301044 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:10.614757  301044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:43:10.614774  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.617611  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.618064  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.618416  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.618595  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.618754  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.791924  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:43:10.850744  301044 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866102  301044 node_ready.go:49] node "default-k8s-diff-port-972693" has status "Ready":"True"
	I0729 13:43:10.866137  301044 node_ready.go:38] duration metric: took 15.35404ms for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866171  301044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:10.877661  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:10.958120  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.981335  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:43:10.981363  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:43:10.982804  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:11.145078  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:43:11.145108  301044 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:43:11.236628  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:11.236658  301044 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:43:11.308646  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315025489s)
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290345752s)
	I0729 13:43:12.273254  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273270  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273283  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273296  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273572  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273589  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273598  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273606  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273704  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273721  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273731  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273739  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.275558  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275601  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275616  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.275624  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275634  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275644  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.309442  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.309473  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.309839  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.309888  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.309909  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.464546  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.155855113s)
	I0729 13:43:12.464601  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.464614  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465037  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465060  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465071  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.465081  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465398  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.465418  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465476  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465494  301044 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-972693"
	I0729 13:43:12.467315  301044 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
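	[editor's note] Once the addons are enabled, the metrics-server rollout could be confirmed from the same host by shelling out to the bundled kubectl, as sketched below; the deployment name "metrics-server" is the addon's conventional name and is assumed rather than read from this log, while the binary and kubeconfig paths match those used above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Wait for the metrics-server deployment to finish rolling out.
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-n", "kube-system", "rollout", "status", "deployment/metrics-server", "--timeout=120s")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("metrics-server not ready:", err)
		}
	}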
	I0729 13:43:09.811571  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:43:09.817221  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:43:09.818319  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:09.818342  300705 api_server.go:131] duration metric: took 3.939996032s to wait for apiserver health ...
	I0729 13:43:09.818350  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:09.818373  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:09.818425  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:09.861856  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:09.861883  300705 cri.go:89] found id: ""
	I0729 13:43:09.861894  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:09.861962  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.867142  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:09.867216  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:09.909767  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:09.909795  300705 cri.go:89] found id: ""
	I0729 13:43:09.909808  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:09.909877  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.914410  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:09.914482  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:09.953540  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:09.953568  300705 cri.go:89] found id: ""
	I0729 13:43:09.953578  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:09.953637  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.958140  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:09.958214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:09.999809  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:09.999836  300705 cri.go:89] found id: ""
	I0729 13:43:09.999846  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:09.999911  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.004505  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:10.004587  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:10.049146  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.049173  300705 cri.go:89] found id: ""
	I0729 13:43:10.049182  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:10.049252  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.053631  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:10.053698  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:10.090361  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.090386  300705 cri.go:89] found id: ""
	I0729 13:43:10.090396  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:10.090442  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.095528  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:10.095588  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:10.131892  300705 cri.go:89] found id: ""
	I0729 13:43:10.131925  300705 logs.go:276] 0 containers: []
	W0729 13:43:10.131937  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:10.131944  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:10.132008  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:10.169101  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.169127  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.169133  300705 cri.go:89] found id: ""
	I0729 13:43:10.169142  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:10.169203  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.174716  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.179196  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:10.179217  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:10.222803  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:10.222833  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:10.265944  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:10.265975  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.310266  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:10.310294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.370562  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:10.370611  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.415759  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:10.415803  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:10.467672  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:10.467702  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:10.531249  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:10.531293  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:10.550454  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:10.550485  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:10.709028  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:10.709068  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:10.761048  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:10.761093  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:10.813125  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:10.813169  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.852581  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:10.852608  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:13.725236  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:43:13.725272  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.725279  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.725284  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.725289  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.725293  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.725298  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.725306  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.725312  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.725322  300705 system_pods.go:74] duration metric: took 3.906966083s to wait for pod list to return data ...
	I0729 13:43:13.725335  300705 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:13.727954  300705 default_sa.go:45] found service account: "default"
	I0729 13:43:13.727984  300705 default_sa.go:55] duration metric: took 2.638639ms for default service account to be created ...
	I0729 13:43:13.728032  300705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:13.733141  300705 system_pods.go:86] 8 kube-system pods found
	I0729 13:43:13.733163  300705 system_pods.go:89] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.733169  300705 system_pods.go:89] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.733173  300705 system_pods.go:89] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.733177  300705 system_pods.go:89] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.733181  300705 system_pods.go:89] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.733185  300705 system_pods.go:89] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.733191  300705 system_pods.go:89] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.733196  300705 system_pods.go:89] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.733205  300705 system_pods.go:126] duration metric: took 5.16021ms to wait for k8s-apps to be running ...
	I0729 13:43:13.733213  300705 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:13.733255  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:13.755011  300705 system_svc.go:56] duration metric: took 21.784065ms WaitForService to wait for kubelet
	I0729 13:43:13.755042  300705 kubeadm.go:582] duration metric: took 4m23.697000108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:13.755068  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:13.758549  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:13.758572  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:13.758586  300705 node_conditions.go:105] duration metric: took 3.512205ms to run NodePressure ...
	I0729 13:43:13.758601  300705 start.go:241] waiting for startup goroutines ...
	I0729 13:43:13.758612  300705 start.go:246] waiting for cluster config update ...
	I0729 13:43:13.758625  300705 start.go:255] writing updated cluster config ...
	I0729 13:43:13.758945  300705 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:13.810333  300705 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:13.812397  300705 out.go:177] * Done! kubectl is now configured to use "embed-certs-135920" cluster and "default" namespace by default
	I0729 13:43:12.468541  301044 addons.go:510] duration metric: took 1.929219306s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:43:12.887280  301044 pod_ready.go:102] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:13.386255  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.386279  301044 pod_ready.go:81] duration metric: took 2.508586907s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.386291  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391278  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.391302  301044 pod_ready.go:81] duration metric: took 5.00403ms for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391313  301044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396324  301044 pod_ready.go:92] pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.396343  301044 pod_ready.go:81] duration metric: took 5.022707ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396350  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403008  301044 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.403026  301044 pod_ready.go:81] duration metric: took 6.670677ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403035  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407836  301044 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.407856  301044 pod_ready.go:81] duration metric: took 4.814401ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407868  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783140  301044 pod_ready.go:92] pod "kube-proxy-tfsk9" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.783168  301044 pod_ready.go:81] duration metric: took 375.291599ms for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783181  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182560  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:14.182588  301044 pod_ready.go:81] duration metric: took 399.399691ms for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182597  301044 pod_ready.go:38] duration metric: took 3.316409576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:14.182610  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:14.182661  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:14.210715  301044 api_server.go:72] duration metric: took 3.671529553s to wait for apiserver process to appear ...
	I0729 13:43:14.210749  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:14.210790  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:43:14.214886  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:43:14.215773  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:14.215795  301044 api_server.go:131] duration metric: took 5.0389ms to wait for apiserver health ...
	I0729 13:43:14.215802  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:14.386356  301044 system_pods.go:59] 9 kube-system pods found
	I0729 13:43:14.386389  301044 system_pods.go:61] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.386394  301044 system_pods.go:61] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.386398  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.386401  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.386405  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.386409  301044 system_pods.go:61] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.386412  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.386417  301044 system_pods.go:61] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.386420  301044 system_pods.go:61] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.386430  301044 system_pods.go:74] duration metric: took 170.622271ms to wait for pod list to return data ...
	I0729 13:43:14.386437  301044 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:14.582618  301044 default_sa.go:45] found service account: "default"
	I0729 13:43:14.582643  301044 default_sa.go:55] duration metric: took 196.19918ms for default service account to be created ...
	I0729 13:43:14.582652  301044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:14.785669  301044 system_pods.go:86] 9 kube-system pods found
	I0729 13:43:14.785701  301044 system_pods.go:89] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.785707  301044 system_pods.go:89] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.785711  301044 system_pods.go:89] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.785719  301044 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.785723  301044 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.785727  301044 system_pods.go:89] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.785731  301044 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.785737  301044 system_pods.go:89] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.785741  301044 system_pods.go:89] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.785750  301044 system_pods.go:126] duration metric: took 203.092668ms to wait for k8s-apps to be running ...
	I0729 13:43:14.785756  301044 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:14.785801  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:14.802927  301044 system_svc.go:56] duration metric: took 17.160927ms WaitForService to wait for kubelet
	I0729 13:43:14.802957  301044 kubeadm.go:582] duration metric: took 4.263780375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:14.802977  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:14.983106  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:14.983135  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:14.983146  301044 node_conditions.go:105] duration metric: took 180.164781ms to run NodePressure ...
	I0729 13:43:14.983159  301044 start.go:241] waiting for startup goroutines ...
	I0729 13:43:14.983165  301044 start.go:246] waiting for cluster config update ...
	I0729 13:43:14.983175  301044 start.go:255] writing updated cluster config ...
	I0729 13:43:14.983443  301044 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:15.038438  301044 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:15.040318  301044 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-972693" cluster and "default" namespace by default
	I0729 13:43:15.690809  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:15.691011  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:25.691962  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:25.692244  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:45.693269  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:45.693473  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696107  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:44:25.696300  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696307  301425 kubeadm.go:310] 
	I0729 13:44:25.696341  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:44:25.696400  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:44:25.696419  301425 kubeadm.go:310] 
	I0729 13:44:25.696463  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:44:25.696510  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:44:25.696653  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:44:25.696674  301425 kubeadm.go:310] 
	I0729 13:44:25.696818  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:44:25.696868  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:44:25.696921  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:44:25.696930  301425 kubeadm.go:310] 
	I0729 13:44:25.697076  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:44:25.697192  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:44:25.697206  301425 kubeadm.go:310] 
	I0729 13:44:25.697349  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:44:25.697459  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:44:25.697568  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:44:25.697669  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:44:25.697680  301425 kubeadm.go:310] 
	I0729 13:44:25.698359  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:44:25.698490  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:44:25.698596  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:44:25.698771  301425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:44:25.698848  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:44:26.160539  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:26.175482  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:44:26.185562  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:44:26.185593  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:44:26.185657  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:44:26.195781  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:44:26.195865  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:44:26.207404  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:44:26.217068  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:44:26.217188  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:44:26.226075  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.234622  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:44:26.234684  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.243756  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:44:26.252630  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:44:26.252695  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:44:26.262846  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:44:26.340215  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:44:26.340318  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:44:26.496049  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:44:26.496199  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:44:26.496327  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:44:26.678135  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:44:26.680089  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:44:26.680173  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:44:26.680257  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:44:26.680378  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:44:26.680470  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:44:26.680570  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:44:26.680653  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:44:26.680751  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:44:26.681022  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:44:26.681519  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:44:26.681876  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:44:26.681994  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:44:26.682083  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:44:26.762680  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:44:26.922517  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:44:26.973731  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:44:27.193064  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:44:27.216477  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:44:27.219036  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:44:27.219293  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:44:27.386424  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:44:27.388194  301425 out.go:204]   - Booting up control plane ...
	I0729 13:44:27.388340  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:44:27.390345  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:44:27.391455  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:44:27.392303  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:44:27.394301  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:45:07.396989  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:45:07.397449  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:07.397719  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:12.397982  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:12.398297  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:22.398751  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:22.399010  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:42.399462  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:42.399675  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398413  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:46:22.398684  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398700  301425 kubeadm.go:310] 
	I0729 13:46:22.398763  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:46:22.398844  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:46:22.398886  301425 kubeadm.go:310] 
	I0729 13:46:22.398948  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:46:22.399002  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:46:22.399132  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:46:22.399145  301425 kubeadm.go:310] 
	I0729 13:46:22.399287  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:46:22.399346  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:46:22.399392  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:46:22.399404  301425 kubeadm.go:310] 
	I0729 13:46:22.399530  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:46:22.399610  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:46:22.399617  301425 kubeadm.go:310] 
	I0729 13:46:22.399735  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:46:22.399844  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:46:22.399943  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:46:22.400021  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:46:22.400035  301425 kubeadm.go:310] 
	I0729 13:46:22.400291  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:46:22.400370  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:46:22.400440  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:46:22.400520  301425 kubeadm.go:394] duration metric: took 7m57.286753846s to StartCluster
	I0729 13:46:22.400612  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:46:22.400692  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:46:22.446188  301425 cri.go:89] found id: ""
	I0729 13:46:22.446216  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.446225  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:46:22.446232  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:46:22.446289  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:46:22.484089  301425 cri.go:89] found id: ""
	I0729 13:46:22.484118  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.484128  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:46:22.484135  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:46:22.484197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:46:22.526817  301425 cri.go:89] found id: ""
	I0729 13:46:22.526846  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.526854  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:46:22.526860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:46:22.526912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:46:22.564787  301425 cri.go:89] found id: ""
	I0729 13:46:22.564834  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.564846  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:46:22.564854  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:46:22.564920  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:46:22.601843  301425 cri.go:89] found id: ""
	I0729 13:46:22.601881  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.601892  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:46:22.601900  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:46:22.601980  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:46:22.637420  301425 cri.go:89] found id: ""
	I0729 13:46:22.637448  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.637455  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:46:22.637462  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:46:22.637519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:46:22.672427  301425 cri.go:89] found id: ""
	I0729 13:46:22.672465  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.672476  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:46:22.672485  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:46:22.672549  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:46:22.708256  301425 cri.go:89] found id: ""
	I0729 13:46:22.708285  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.708294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:46:22.708306  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:46:22.708323  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:46:22.819287  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:46:22.819327  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:46:22.859298  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:46:22.859339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:46:22.914290  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:46:22.914342  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:46:22.936919  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:46:22.936951  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:46:23.035889  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 13:46:23.035939  301425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:46:23.035991  301425 out.go:239] * 
	W0729 13:46:23.036103  301425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.036137  301425 out.go:239] * 
	W0729 13:46:23.037370  301425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:46:23.040573  301425 out.go:177] 
	W0729 13:46:23.042130  301425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.042173  301425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:46:23.042193  301425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:46:23.043539  301425 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.340599448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261328340572840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad3fe5da-47cf-4617-b07d-28847c47f97c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.341095681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45a2fa21-27e2-480d-b2d5-e1f2cc224d0e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.341164635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45a2fa21-27e2-480d-b2d5-e1f2cc224d0e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.341196476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=45a2fa21-27e2-480d-b2d5-e1f2cc224d0e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.375945851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7856442d-bf19-4e9a-8963-516f53f4a284 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.376043451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7856442d-bf19-4e9a-8963-516f53f4a284 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.377439600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70ec173d-5810-4f43-aa31-a8d2cb131cfc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.377904892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261328377875562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70ec173d-5810-4f43-aa31-a8d2cb131cfc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.378371227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd098f00-3bdd-49c3-9d0f-6d5ebb5794f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.378442692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd098f00-3bdd-49c3-9d0f-6d5ebb5794f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.378479315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cd098f00-3bdd-49c3-9d0f-6d5ebb5794f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.410096135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff8129b3-9fa5-45c0-8771-019885919975 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.410186564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff8129b3-9fa5-45c0-8771-019885919975 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.411658728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3437ba74-6108-4aad-b8b4-be092a3b1616 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.412131884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261328412111690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3437ba74-6108-4aad-b8b4-be092a3b1616 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.412640354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f58b458d-a0f5-4da8-bc4d-3520db961c6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.412724964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f58b458d-a0f5-4da8-bc4d-3520db961c6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.412861218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f58b458d-a0f5-4da8-bc4d-3520db961c6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.444862803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08670fee-21c2-4def-9b93-1c64060142ab name=/runtime.v1.RuntimeService/Version
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.444990318Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08670fee-21c2-4def-9b93-1c64060142ab name=/runtime.v1.RuntimeService/Version
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.446113520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5453d58b-eee4-451f-a8d7-8f85497b2b70 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.446489451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261328446465658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5453d58b-eee4-451f-a8d7-8f85497b2b70 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.447056258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb57be6b-8b67-491e-9b58-3156739fd0e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.447108855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb57be6b-8b67-491e-9b58-3156739fd0e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:55:28 old-k8s-version-924039 crio[651]: time="2024-07-29 13:55:28.447143706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fb57be6b-8b67-491e-9b58-3156739fd0e9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 13:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050569] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048582] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul29 13:38] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.901895] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.671429] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000011] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.092860] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.061256] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065965] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.189582] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.150988] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.251542] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.656340] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.075950] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.028528] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +9.845046] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 13:42] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Jul29 13:44] systemd-fstab-generator[5301]: Ignoring "noauto" option for root device
	[  +0.070564] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:55:28 up 17 min,  0 users,  load average: 0.08, 0.10, 0.04
	Linux old-k8s-version-924039 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009c0c0, 0xc00054ba70)
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: goroutine 156 [select]:
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00096fef0, 0x4f0ac20, 0xc000894230, 0x1, 0xc00009c0c0)
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001460e0, 0xc00009c0c0)
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008a2240, 0xc000407880)
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 29 13:55:22 old-k8s-version-924039 kubelet[6465]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 29 13:55:22 old-k8s-version-924039 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 13:55:22 old-k8s-version-924039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 13:55:23 old-k8s-version-924039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 29 13:55:23 old-k8s-version-924039 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 13:55:23 old-k8s-version-924039 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 13:55:23 old-k8s-version-924039 kubelet[6474]: I0729 13:55:23.700187    6474 server.go:416] Version: v1.20.0
	Jul 29 13:55:23 old-k8s-version-924039 kubelet[6474]: I0729 13:55:23.700439    6474 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 13:55:23 old-k8s-version-924039 kubelet[6474]: I0729 13:55:23.702163    6474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 13:55:23 old-k8s-version-924039 kubelet[6474]: W0729 13:55:23.702941    6474 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 13:55:23 old-k8s-version-924039 kubelet[6474]: I0729 13:55:23.703524    6474 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (234.089081ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-924039" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.47s)
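The kubelet journal above shows the node agent crash-looping (systemd restart counter at 114, "Cannot detect current cgroup on cgroup v2"), which matches the kubeadm wait-control-plane timeout and the K8S_KUBELET_NOT_RUNNING exit earlier in this log. A minimal follow-up sketch, assuming shell access to the profile named in this output; the profile name, the kvm2/crio/v1.20.0 flags from the Audit table, and the cgroup-driver suggestion are taken from the log, everything else is illustrative rather than part of the test run:

    # Inspect the kubelet on the node (same commands the kubeadm advice above lists)
    minikube ssh -p old-k8s-version-924039 "sudo systemctl status kubelet"
    minikube ssh -p old-k8s-version-924039 "sudo journalctl -xeu kubelet"
    # Check whether CRI-O started any control-plane containers at all
    minikube ssh -p old-k8s-version-924039 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
    # Retry the start with the cgroup driver the minikube suggestion names
    minikube start -p old-k8s-version-924039 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
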

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (384.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-566777 -n no-preload-566777
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 13:57:56.811540474 +0000 UTC m=+6955.074895373
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-566777 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-566777 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.498µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-566777 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-566777 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-566777 logs -n 25: (1.295384847s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC | 29 Jul 24 13:57 UTC |
	| start   | -p newest-cni-615666 --memory=2200 --alsologtostderr   | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:57:52
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:57:52.651094  307651 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:57:52.651199  307651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:57:52.651209  307651 out.go:304] Setting ErrFile to fd 2...
	I0729 13:57:52.651216  307651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:57:52.651386  307651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:57:52.652080  307651 out.go:298] Setting JSON to false
	I0729 13:57:52.653116  307651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13216,"bootTime":1722248257,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:57:52.653175  307651 start.go:139] virtualization: kvm guest
	I0729 13:57:52.655592  307651 out.go:177] * [newest-cni-615666] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:57:52.656944  307651 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:57:52.656959  307651 notify.go:220] Checking for updates...
	I0729 13:57:52.659847  307651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:57:52.661267  307651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:57:52.662468  307651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:57:52.663832  307651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:57:52.665019  307651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:57:52.666584  307651 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:57:52.666670  307651 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:57:52.666770  307651 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:57:52.666875  307651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:57:52.703045  307651 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:57:52.704371  307651 start.go:297] selected driver: kvm2
	I0729 13:57:52.704386  307651 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:57:52.704397  307651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:57:52.705201  307651 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:57:52.705269  307651 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:57:52.720770  307651 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:57:52.720844  307651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 13:57:52.720875  307651 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 13:57:52.721176  307651 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 13:57:52.721212  307651 cni.go:84] Creating CNI manager for ""
	I0729 13:57:52.721238  307651 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:57:52.721253  307651 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:57:52.721332  307651 start.go:340] cluster config:
	{Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:57:52.721469  307651 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:57:52.723313  307651 out.go:177] * Starting "newest-cni-615666" primary control-plane node in "newest-cni-615666" cluster
	I0729 13:57:52.724624  307651 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:57:52.724655  307651 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:57:52.724665  307651 cache.go:56] Caching tarball of preloaded images
	I0729 13:57:52.724744  307651 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:57:52.724758  307651 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 13:57:52.724889  307651 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/config.json ...
	I0729 13:57:52.724916  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/config.json: {Name:mk5d51a59524b27e545a3123b6e789ee822fbdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:57:52.725056  307651 start.go:360] acquireMachinesLock for newest-cni-615666: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:57:52.725085  307651 start.go:364] duration metric: took 15.841µs to acquireMachinesLock for "newest-cni-615666"
	I0729 13:57:52.725109  307651 start.go:93] Provisioning new machine with config: &{Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:57:52.725174  307651 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.451653776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261477451624407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a02e65f-bcad-40a3-891e-e12e13acf847 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.452202643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f552795-5962-4025-b699-46be959b1556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.452296405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f552795-5962-4025-b699-46be959b1556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.452697703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f552795-5962-4025-b699-46be959b1556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.498773653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd47234e-d6d5-4a22-92db-af38022eecf8 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.498901897Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd47234e-d6d5-4a22-92db-af38022eecf8 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.500583438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98dc04c7-a8bd-4120-a288-0f8b0569c9bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.500929361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261477500906253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98dc04c7-a8bd-4120-a288-0f8b0569c9bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.501903711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2adc2741-a34d-4435-b00a-5b0edb8c832c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.501959889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2adc2741-a34d-4435-b00a-5b0edb8c832c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.502200396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2adc2741-a34d-4435-b00a-5b0edb8c832c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.553668671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0a5c7bc-e96a-425c-aa09-7e03e019d758 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.553788054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0a5c7bc-e96a-425c-aa09-7e03e019d758 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.555179602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82b9f716-c75a-4b5c-a311-a18b88b53118 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.555663980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261477555640267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82b9f716-c75a-4b5c-a311-a18b88b53118 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.556245364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53e46525-0347-4381-a5d6-4481611b1290 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.556295961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53e46525-0347-4381-a5d6-4481611b1290 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.556950497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53e46525-0347-4381-a5d6-4481611b1290 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.600941014Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c354f77-e88c-436a-9020-dd3cd634f0b7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.601056059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c354f77-e88c-436a-9020-dd3cd634f0b7 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.602699544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bca03644-efd9-41bb-beae-933581099496 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.603681591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261477603655176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bca03644-efd9-41bb-beae-933581099496 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.604488422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ef0fb97-d4b9-4888-93bd-00a1e3b4cdbe name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.604577808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ef0fb97-d4b9-4888-93bd-00a1e3b4cdbe name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:57 no-preload-566777 crio[707]: time="2024-07-29 13:57:57.604831217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260295916592638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25a1cef4fe62b959b63a0ea5ae0be4eed4725e01da6be6ed9dacb7746f95f58,PodSandboxId:e9a8ce40643081f6e59f9a61f7aff033a9be3f94aa76cd845223a2caa6fc48e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260289870153481,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 913d9c33-01b3-4966-bbfb-61a75f958c12,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e,PodSandboxId:7832ee370975d85e084f122eea8217b63855127b6b081fd616a2248e0ffae0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260286979051346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kkrqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d1ab6ca-6006-450e-8bef-bf9136e5e575,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5,PodSandboxId:e4dc76a0df61af3e2645a535a4bb8f57bcc8ab753db156306f1438b4c631b563,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260280024593988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
3074247-17ba-465c-8cfe-d0fcc0241468,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2,PodSandboxId:2deb18f3e0f2366e352621fe59598d9ba5d5a97c7fac5f61fe72c2220ce315a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722260279354446796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ql6wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ee6e47-c0f9-4c98-b294-3ee39b6278
84,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa,PodSandboxId:54a01ed813dbdb8b134b3e3b1ee549d6372ec3a9c7a3bae4bb92b7fa2ab228cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722260274694984594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb64324503455e84
4b1a6d605201625d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2,PodSandboxId:fafa613b78cb7bcf60fc41bf5938cb6e9a88e60b8eed1e4826aedb7a5c200694,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722260274618799766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c20f959dbbac974f49ab921fe8fe8
ecd,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e,PodSandboxId:73d98712f2ebca8b45b709f842cfb3d7c8ab64632387b153d245eef7d58c0e57,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722260274601969104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ba46991e39bfca6afa3f59eb02c317,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6,PodSandboxId:5f8674c0bd92cf295d8e1f6115e51d9e5fe7e4e961b82dbda1b957846c75ac68,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722260274564841493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-566777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c7239a3fdc31ee696d9e70cf015f9c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ef0fb97-d4b9-4888-93bd-00a1e3b4cdbe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5dcd5030f62fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       4                   e4dc76a0df61a       storage-provisioner
	a25a1cef4fe62       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   e9a8ce4064308       busybox
	5889da7fe3143       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   7832ee370975d       coredns-5cfdc65f69-kkrqd
	09fdadca1aa7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       3                   e4dc76a0df61a       storage-provisioner
	a2ed90bc70759       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      19 minutes ago      Running             kube-proxy                1                   2deb18f3e0f23       kube-proxy-ql6wf
	5c91d66f36628       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      20 minutes ago      Running             kube-controller-manager   1                   54a01ed813dbd       kube-controller-manager-no-preload-566777
	f08ba8d78f505       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      20 minutes ago      Running             kube-apiserver            1                   fafa613b78cb7       kube-apiserver-no-preload-566777
	6d236da3b529e       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      20 minutes ago      Running             kube-scheduler            1                   73d98712f2ebc       kube-scheduler-no-preload-566777
	f784cabd7fc33       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      20 minutes ago      Running             etcd                      1                   5f8674c0bd92c       etcd-no-preload-566777
	
	
	==> coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43951 - 28014 "HINFO IN 7181564784847732016.5453138748017200787. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009842007s
	
	
	==> describe nodes <==
	Name:               no-preload-566777
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-566777
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=no-preload-566777
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-566777
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:57:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:53:47 +0000   Mon, 29 Jul 2024 13:29:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:53:47 +0000   Mon, 29 Jul 2024 13:29:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:53:47 +0000   Mon, 29 Jul 2024 13:29:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:53:47 +0000   Mon, 29 Jul 2024 13:38:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.84
	  Hostname:    no-preload-566777
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f2be5108ba24204911e831586431a5d
	  System UUID:                7f2be510-8ba2-4204-911e-831586431a5d
	  Boot ID:                    1d18b67e-906a-4f97-b0b1-1bb083aa856d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5cfdc65f69-kkrqd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-566777                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-566777             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-566777    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-ql6wf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-566777             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-78fcd8795b-dv8pr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-566777 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-566777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-566777 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-566777 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-566777 event: Registered Node no-preload-566777 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-566777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-566777 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-566777 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-566777 event: Registered Node no-preload-566777 in Controller
	
	
	==> dmesg <==
	[Jul29 13:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049799] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040707] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.748284] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.381971] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.578120] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.118438] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.066924] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057800] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.157541] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.126597] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.295899] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[ +15.081918] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +0.058288] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.470739] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +2.932394] kauditd_printk_skb: 97 callbacks suppressed
	[Jul29 13:38] kauditd_printk_skb: 42 callbacks suppressed
	[  +1.663493] systemd-fstab-generator[1969]: Ignoring "noauto" option for root device
	[  +4.370181] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] <==
	{"level":"info","ts":"2024-07-29T13:38:02.01661Z","caller":"traceutil/trace.go:171","msg":"trace[128830485] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:515; }","duration":"429.943218ms","start":"2024-07-29T13:38:01.586658Z","end":"2024-07-29T13:38:02.016602Z","steps":["trace[128830485] 'agreement among raft nodes before linearized reading'  (duration: 429.795017ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.016636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.586629Z","time spent":"430.001576ms","remote":"127.0.0.1:37672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":233,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"warn","ts":"2024-07-29T13:38:02.016827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.130031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:4008"}
	{"level":"info","ts":"2024-07-29T13:38:02.016849Z","caller":"traceutil/trace.go:171","msg":"trace[917067676] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:515; }","duration":"430.152914ms","start":"2024-07-29T13:38:01.586689Z","end":"2024-07-29T13:38:02.016842Z","steps":["trace[917067676] 'agreement among raft nodes before linearized reading'  (duration: 430.064169ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.016872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.58668Z","time spent":"430.184966ms","remote":"127.0.0.1:37652","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":4032,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2024-07-29T13:38:02.017542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"419.647574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T13:38:02.017581Z","caller":"traceutil/trace.go:171","msg":"trace[2023227524] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:515; }","duration":"419.68803ms","start":"2024-07-29T13:38:01.597879Z","end":"2024-07-29T13:38:02.017567Z","steps":["trace[2023227524] 'agreement among raft nodes before linearized reading'  (duration: 419.163268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.017611Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.597851Z","time spent":"419.753786ms","remote":"127.0.0.1:37638","response type":"/etcdserverpb.KV/Range","request count":0,"request size":21,"response count":0,"response size":29,"request content":"key:\"/registry/minions\" limit:1 "}
	{"level":"warn","ts":"2024-07-29T13:38:02.017742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"428.809642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-07-29T13:38:02.017766Z","caller":"traceutil/trace.go:171","msg":"trace[288506604] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:515; }","duration":"428.832782ms","start":"2024-07-29T13:38:01.588927Z","end":"2024-07-29T13:38:02.01776Z","steps":["trace[288506604] 'agreement among raft nodes before linearized reading'  (duration: 428.787488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:02.017785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:38:01.588894Z","time spent":"428.886369ms","remote":"127.0.0.1:37672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":240,"request content":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" "}
	{"level":"warn","ts":"2024-07-29T13:38:25.260032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.044462ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7744661932870694189 > lease_revoke:<id:6b7a90feb63f748c>","response":"size:29"}
	{"level":"warn","ts":"2024-07-29T13:38:44.060332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.629894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-dv8pr\" ","response":"range_response_count:1 size:4383"}
	{"level":"info","ts":"2024-07-29T13:38:44.060488Z","caller":"traceutil/trace.go:171","msg":"trace[757519653] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-dv8pr; range_end:; response_count:1; response_revision:611; }","duration":"246.800895ms","start":"2024-07-29T13:38:43.813672Z","end":"2024-07-29T13:38:44.060473Z","steps":["trace[757519653] 'range keys from in-memory index tree'  (duration: 246.450546ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:38:44.06062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.817935ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T13:38:44.060655Z","caller":"traceutil/trace.go:171","msg":"trace[549234005] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:611; }","duration":"349.85894ms","start":"2024-07-29T13:38:43.710787Z","end":"2024-07-29T13:38:44.060646Z","steps":["trace[549234005] 'range keys from in-memory index tree'  (duration: 349.80977ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:47:56.393125Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2024-07-29T13:47:56.405695Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":832,"took":"10.749065ms","hash":1731901634,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2801664,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-29T13:47:56.406487Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1731901634,"revision":832,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T13:52:56.400297Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1074}
	{"level":"info","ts":"2024-07-29T13:52:56.404309Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1074,"took":"3.400805ms","hash":120406402,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1646592,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T13:52:56.40443Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":120406402,"revision":1074,"compact-revision":832}
	{"level":"info","ts":"2024-07-29T13:57:56.4075Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1318}
	{"level":"info","ts":"2024-07-29T13:57:56.411501Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1318,"took":"3.448311ms","hash":3173105456,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T13:57:56.411596Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3173105456,"revision":1318,"compact-revision":1074}
	
	
	==> kernel <==
	 13:57:57 up 20 min,  0 users,  load average: 0.04, 0.08, 0.09
	Linux no-preload-566777 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] <==
	W0729 13:52:59.089333       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:52:59.089545       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 13:52:59.090411       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 13:52:59.091242       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:53:59.090910       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:53:59.091037       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 13:53:59.092100       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:53:59.092231       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:53:59.092343       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 13:53:59.093650       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:55:59.092661       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:55:59.092788       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 13:55:59.093937       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:55:59.094020       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 13:55:59.094089       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 13:55:59.095307       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] <==
	E0729 13:52:34.548259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:52:34.657053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:53:04.561539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:53:04.664896       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:53:34.567878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:53:34.674005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:53:47.803119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-566777"
	I0729 13:54:00.931495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="347.365µs"
	E0729 13:54:04.575467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:54:04.682244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:54:14.928455       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="202.546µs"
	E0729 13:54:34.582797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:54:34.690106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:55:04.589719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:55:04.697292       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:55:34.596998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:55:34.705094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:56:04.603945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:56:04.716238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:56:34.610178       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:56:34.723744       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:57:04.617338       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:57:04.731502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:57:34.625000       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 13:57:34.739339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 13:37:59.915978       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 13:38:00.316057       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.84"]
	E0729 13:38:00.316465       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 13:38:00.360268       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 13:38:00.360463       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:38:00.360593       1 server_linux.go:170] "Using iptables Proxier"
	I0729 13:38:00.364051       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 13:38:00.364869       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 13:38:00.365032       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:38:00.370426       1 config.go:197] "Starting service config controller"
	I0729 13:38:00.370503       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:38:00.370735       1 config.go:104] "Starting endpoint slice config controller"
	I0729 13:38:00.370773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:38:00.374000       1 config.go:326] "Starting node config controller"
	I0729 13:38:00.374530       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:38:00.471107       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:38:00.471240       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:38:00.475517       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] <==
	I0729 13:37:55.520050       1 serving.go:386] Generated self-signed cert in-memory
	W0729 13:37:58.019105       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 13:37:58.019225       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:37:58.019268       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 13:37:58.019298       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 13:37:58.105624       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 13:37:58.105683       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:37:58.114496       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 13:37:58.114607       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:37:58.117636       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 13:37:58.117734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 13:37:58.215051       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:55:53 no-preload-566777 kubelet[1282]: E0729 13:55:53.937764    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:55:53 no-preload-566777 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:55:53 no-preload-566777 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:55:53 no-preload-566777 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:55:53 no-preload-566777 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:56:01 no-preload-566777 kubelet[1282]: E0729 13:56:01.906856    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:56:12 no-preload-566777 kubelet[1282]: E0729 13:56:12.905856    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:56:24 no-preload-566777 kubelet[1282]: E0729 13:56:24.905251    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:56:36 no-preload-566777 kubelet[1282]: E0729 13:56:36.905113    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:56:51 no-preload-566777 kubelet[1282]: E0729 13:56:51.906921    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:56:53 no-preload-566777 kubelet[1282]: E0729 13:56:53.935331    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:56:53 no-preload-566777 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:56:53 no-preload-566777 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:56:53 no-preload-566777 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:56:53 no-preload-566777 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:57:06 no-preload-566777 kubelet[1282]: E0729 13:57:06.906150    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:57:17 no-preload-566777 kubelet[1282]: E0729 13:57:17.907466    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:57:29 no-preload-566777 kubelet[1282]: E0729 13:57:29.905653    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:57:43 no-preload-566777 kubelet[1282]: E0729 13:57:43.906933    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	Jul 29 13:57:53 no-preload-566777 kubelet[1282]: E0729 13:57:53.936280    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:57:53 no-preload-566777 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:57:53 no-preload-566777 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:57:53 no-preload-566777 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:57:53 no-preload-566777 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:57:56 no-preload-566777 kubelet[1282]: E0729 13:57:56.905585    1282 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dv8pr" podUID="0505f724-9244-4dca-9ade-6209131087e8"
	
	
	==> storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] <==
	I0729 13:38:00.543821       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 13:38:00.546637       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] <==
	I0729 13:38:16.016716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:38:16.029786       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:38:16.030900       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:38:33.435978       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:38:33.436507       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"415690b2-bf97-40fe-a529-12c868c1546e", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-566777_4ff1a963-eaff-4771-9f52-7083647aaf80 became leader
	I0729 13:38:33.436834       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-566777_4ff1a963-eaff-4771-9f52-7083647aaf80!
	I0729 13:38:33.537275       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-566777_4ff1a963-eaff-4771-9f52-7083647aaf80!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-566777 -n no-preload-566777
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-566777 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-dv8pr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-566777 describe pod metrics-server-78fcd8795b-dv8pr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-566777 describe pod metrics-server-78fcd8795b-dv8pr: exit status 1 (65.950079ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-dv8pr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-566777 describe pod metrics-server-78fcd8795b-dv8pr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (384.81s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-135920 -n embed-certs-135920
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 13:58:41.520027545 +0000 UTC m=+6999.783382353
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-135920 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-135920 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.814µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-135920 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-135920 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-135920 logs -n 25: (1.332123653s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC | 29 Jul 24 13:57 UTC |
	| start   | -p newest-cni-615666 --memory=2200 --alsologtostderr   | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC | 29 Jul 24 13:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC | 29 Jul 24 13:57 UTC |
	| addons  | enable metrics-server -p newest-cni-615666             | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:58 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:57:52
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:57:52.651094  307651 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:57:52.651199  307651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:57:52.651209  307651 out.go:304] Setting ErrFile to fd 2...
	I0729 13:57:52.651216  307651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:57:52.651386  307651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:57:52.652080  307651 out.go:298] Setting JSON to false
	I0729 13:57:52.653116  307651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13216,"bootTime":1722248257,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:57:52.653175  307651 start.go:139] virtualization: kvm guest
	I0729 13:57:52.655592  307651 out.go:177] * [newest-cni-615666] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:57:52.656944  307651 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:57:52.656959  307651 notify.go:220] Checking for updates...
	I0729 13:57:52.659847  307651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:57:52.661267  307651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:57:52.662468  307651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:57:52.663832  307651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:57:52.665019  307651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:57:52.666584  307651 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:57:52.666670  307651 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:57:52.666770  307651 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:57:52.666875  307651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:57:52.703045  307651 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:57:52.704371  307651 start.go:297] selected driver: kvm2
	I0729 13:57:52.704386  307651 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:57:52.704397  307651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:57:52.705201  307651 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:57:52.705269  307651 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:57:52.720770  307651 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:57:52.720844  307651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 13:57:52.720875  307651 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 13:57:52.721176  307651 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 13:57:52.721212  307651 cni.go:84] Creating CNI manager for ""
	I0729 13:57:52.721238  307651 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:57:52.721253  307651 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:57:52.721332  307651 start.go:340] cluster config:
	{Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:57:52.721469  307651 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:57:52.723313  307651 out.go:177] * Starting "newest-cni-615666" primary control-plane node in "newest-cni-615666" cluster
	I0729 13:57:52.724624  307651 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:57:52.724655  307651 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:57:52.724665  307651 cache.go:56] Caching tarball of preloaded images
	I0729 13:57:52.724744  307651 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:57:52.724758  307651 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 13:57:52.724889  307651 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/config.json ...
	I0729 13:57:52.724916  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/config.json: {Name:mk5d51a59524b27e545a3123b6e789ee822fbdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:57:52.725056  307651 start.go:360] acquireMachinesLock for newest-cni-615666: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:57:52.725085  307651 start.go:364] duration metric: took 15.841µs to acquireMachinesLock for "newest-cni-615666"
	I0729 13:57:52.725109  307651 start.go:93] Provisioning new machine with config: &{Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:57:52.725174  307651 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:57:52.727553  307651 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:57:52.727679  307651 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:57:52.727712  307651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:57:52.742440  307651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0729 13:57:52.742869  307651 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:57:52.743425  307651 main.go:141] libmachine: Using API Version  1
	I0729 13:57:52.743446  307651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:57:52.743734  307651 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:57:52.743928  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetMachineName
	I0729 13:57:52.744068  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:57:52.744217  307651 start.go:159] libmachine.API.Create for "newest-cni-615666" (driver="kvm2")
	I0729 13:57:52.744247  307651 client.go:168] LocalClient.Create starting
	I0729 13:57:52.744272  307651 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem
	I0729 13:57:52.744305  307651 main.go:141] libmachine: Decoding PEM data...
	I0729 13:57:52.744321  307651 main.go:141] libmachine: Parsing certificate...
	I0729 13:57:52.744385  307651 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem
	I0729 13:57:52.744409  307651 main.go:141] libmachine: Decoding PEM data...
	I0729 13:57:52.744419  307651 main.go:141] libmachine: Parsing certificate...
	I0729 13:57:52.744436  307651 main.go:141] libmachine: Running pre-create checks...
	I0729 13:57:52.744451  307651 main.go:141] libmachine: (newest-cni-615666) Calling .PreCreateCheck
	I0729 13:57:52.744829  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetConfigRaw
	I0729 13:57:52.745215  307651 main.go:141] libmachine: Creating machine...
	I0729 13:57:52.745228  307651 main.go:141] libmachine: (newest-cni-615666) Calling .Create
	I0729 13:57:52.745359  307651 main.go:141] libmachine: (newest-cni-615666) Creating KVM machine...
	I0729 13:57:52.746661  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found existing default KVM network
	I0729 13:57:52.748263  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:52.748122  307673 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027c220}
	I0729 13:57:52.748287  307651 main.go:141] libmachine: (newest-cni-615666) DBG | created network xml: 
	I0729 13:57:52.748300  307651 main.go:141] libmachine: (newest-cni-615666) DBG | <network>
	I0729 13:57:52.748309  307651 main.go:141] libmachine: (newest-cni-615666) DBG |   <name>mk-newest-cni-615666</name>
	I0729 13:57:52.748318  307651 main.go:141] libmachine: (newest-cni-615666) DBG |   <dns enable='no'/>
	I0729 13:57:52.748325  307651 main.go:141] libmachine: (newest-cni-615666) DBG |   
	I0729 13:57:52.748340  307651 main.go:141] libmachine: (newest-cni-615666) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 13:57:52.748362  307651 main.go:141] libmachine: (newest-cni-615666) DBG |     <dhcp>
	I0729 13:57:52.748386  307651 main.go:141] libmachine: (newest-cni-615666) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 13:57:52.748394  307651 main.go:141] libmachine: (newest-cni-615666) DBG |     </dhcp>
	I0729 13:57:52.748400  307651 main.go:141] libmachine: (newest-cni-615666) DBG |   </ip>
	I0729 13:57:52.748407  307651 main.go:141] libmachine: (newest-cni-615666) DBG |   
	I0729 13:57:52.748415  307651 main.go:141] libmachine: (newest-cni-615666) DBG | </network>
	I0729 13:57:52.748425  307651 main.go:141] libmachine: (newest-cni-615666) DBG | 
	I0729 13:57:52.753772  307651 main.go:141] libmachine: (newest-cni-615666) DBG | trying to create private KVM network mk-newest-cni-615666 192.168.39.0/24...
	I0729 13:57:52.824571  307651 main.go:141] libmachine: (newest-cni-615666) DBG | private KVM network mk-newest-cni-615666 192.168.39.0/24 created
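The private network is created through the libvirt API; an equivalent manual sketch with the virsh CLI, using the network XML logged above (saved to a hypothetical /tmp path), would be:

    # assumes the <network> XML printed above was saved to /tmp/mk-newest-cni-615666.xml
    virsh net-define /tmp/mk-newest-cni-615666.xml   # register the network definition
    virsh net-start mk-newest-cni-615666             # bring up the bridge and dnsmasq serving the DHCP range
    virsh net-dumpxml mk-newest-cni-615666           # verify the 192.168.39.0/24 subnet and 39.2-39.253 lease range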
	I0729 13:57:52.824609  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:52.824563  307673 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:57:52.824623  307651 main.go:141] libmachine: (newest-cni-615666) Setting up store path in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666 ...
	I0729 13:57:52.824639  307651 main.go:141] libmachine: (newest-cni-615666) Building disk image from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:57:52.824745  307651 main.go:141] libmachine: (newest-cni-615666) Downloading /home/jenkins/minikube-integration/19341-233093/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:57:53.124984  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:53.124837  307673 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa...
	I0729 13:57:53.212964  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:53.212788  307673 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/newest-cni-615666.rawdisk...
	I0729 13:57:53.213001  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Writing magic tar header
	I0729 13:57:53.213024  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Writing SSH key tar header
	I0729 13:57:53.213083  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:53.213023  307673 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666 ...
	I0729 13:57:53.213146  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666
	I0729 13:57:53.213188  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube/machines
	I0729 13:57:53.213214  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:57:53.213225  307651 main.go:141] libmachine: (newest-cni-615666) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666 (perms=drwx------)
	I0729 13:57:53.213240  307651 main.go:141] libmachine: (newest-cni-615666) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:57:53.213255  307651 main.go:141] libmachine: (newest-cni-615666) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093/.minikube (perms=drwxr-xr-x)
	I0729 13:57:53.213268  307651 main.go:141] libmachine: (newest-cni-615666) Setting executable bit set on /home/jenkins/minikube-integration/19341-233093 (perms=drwxrwxr-x)
	I0729 13:57:53.213278  307651 main.go:141] libmachine: (newest-cni-615666) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:57:53.213288  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19341-233093
	I0729 13:57:53.213297  307651 main.go:141] libmachine: (newest-cni-615666) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:57:53.213308  307651 main.go:141] libmachine: (newest-cni-615666) Creating domain...
	I0729 13:57:53.213324  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:57:53.213338  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:57:53.213348  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Checking permissions on dir: /home
	I0729 13:57:53.213357  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Skipping /home - not owner
	I0729 13:57:53.214501  307651 main.go:141] libmachine: (newest-cni-615666) define libvirt domain using xml: 
	I0729 13:57:53.214536  307651 main.go:141] libmachine: (newest-cni-615666) <domain type='kvm'>
	I0729 13:57:53.214546  307651 main.go:141] libmachine: (newest-cni-615666)   <name>newest-cni-615666</name>
	I0729 13:57:53.214559  307651 main.go:141] libmachine: (newest-cni-615666)   <memory unit='MiB'>2200</memory>
	I0729 13:57:53.214569  307651 main.go:141] libmachine: (newest-cni-615666)   <vcpu>2</vcpu>
	I0729 13:57:53.214580  307651 main.go:141] libmachine: (newest-cni-615666)   <features>
	I0729 13:57:53.214589  307651 main.go:141] libmachine: (newest-cni-615666)     <acpi/>
	I0729 13:57:53.214599  307651 main.go:141] libmachine: (newest-cni-615666)     <apic/>
	I0729 13:57:53.214607  307651 main.go:141] libmachine: (newest-cni-615666)     <pae/>
	I0729 13:57:53.214615  307651 main.go:141] libmachine: (newest-cni-615666)     
	I0729 13:57:53.214627  307651 main.go:141] libmachine: (newest-cni-615666)   </features>
	I0729 13:57:53.214636  307651 main.go:141] libmachine: (newest-cni-615666)   <cpu mode='host-passthrough'>
	I0729 13:57:53.214666  307651 main.go:141] libmachine: (newest-cni-615666)   
	I0729 13:57:53.214684  307651 main.go:141] libmachine: (newest-cni-615666)   </cpu>
	I0729 13:57:53.214690  307651 main.go:141] libmachine: (newest-cni-615666)   <os>
	I0729 13:57:53.214699  307651 main.go:141] libmachine: (newest-cni-615666)     <type>hvm</type>
	I0729 13:57:53.214712  307651 main.go:141] libmachine: (newest-cni-615666)     <boot dev='cdrom'/>
	I0729 13:57:53.214724  307651 main.go:141] libmachine: (newest-cni-615666)     <boot dev='hd'/>
	I0729 13:57:53.214734  307651 main.go:141] libmachine: (newest-cni-615666)     <bootmenu enable='no'/>
	I0729 13:57:53.214745  307651 main.go:141] libmachine: (newest-cni-615666)   </os>
	I0729 13:57:53.214761  307651 main.go:141] libmachine: (newest-cni-615666)   <devices>
	I0729 13:57:53.214772  307651 main.go:141] libmachine: (newest-cni-615666)     <disk type='file' device='cdrom'>
	I0729 13:57:53.214789  307651 main.go:141] libmachine: (newest-cni-615666)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/boot2docker.iso'/>
	I0729 13:57:53.214803  307651 main.go:141] libmachine: (newest-cni-615666)       <target dev='hdc' bus='scsi'/>
	I0729 13:57:53.214813  307651 main.go:141] libmachine: (newest-cni-615666)       <readonly/>
	I0729 13:57:53.214822  307651 main.go:141] libmachine: (newest-cni-615666)     </disk>
	I0729 13:57:53.214835  307651 main.go:141] libmachine: (newest-cni-615666)     <disk type='file' device='disk'>
	I0729 13:57:53.214849  307651 main.go:141] libmachine: (newest-cni-615666)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:57:53.214866  307651 main.go:141] libmachine: (newest-cni-615666)       <source file='/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/newest-cni-615666.rawdisk'/>
	I0729 13:57:53.214883  307651 main.go:141] libmachine: (newest-cni-615666)       <target dev='hda' bus='virtio'/>
	I0729 13:57:53.214896  307651 main.go:141] libmachine: (newest-cni-615666)     </disk>
	I0729 13:57:53.214907  307651 main.go:141] libmachine: (newest-cni-615666)     <interface type='network'>
	I0729 13:57:53.214918  307651 main.go:141] libmachine: (newest-cni-615666)       <source network='mk-newest-cni-615666'/>
	I0729 13:57:53.214929  307651 main.go:141] libmachine: (newest-cni-615666)       <model type='virtio'/>
	I0729 13:57:53.214941  307651 main.go:141] libmachine: (newest-cni-615666)     </interface>
	I0729 13:57:53.214975  307651 main.go:141] libmachine: (newest-cni-615666)     <interface type='network'>
	I0729 13:57:53.214988  307651 main.go:141] libmachine: (newest-cni-615666)       <source network='default'/>
	I0729 13:57:53.215007  307651 main.go:141] libmachine: (newest-cni-615666)       <model type='virtio'/>
	I0729 13:57:53.215017  307651 main.go:141] libmachine: (newest-cni-615666)     </interface>
	I0729 13:57:53.215033  307651 main.go:141] libmachine: (newest-cni-615666)     <serial type='pty'>
	I0729 13:57:53.215068  307651 main.go:141] libmachine: (newest-cni-615666)       <target port='0'/>
	I0729 13:57:53.215091  307651 main.go:141] libmachine: (newest-cni-615666)     </serial>
	I0729 13:57:53.215105  307651 main.go:141] libmachine: (newest-cni-615666)     <console type='pty'>
	I0729 13:57:53.215120  307651 main.go:141] libmachine: (newest-cni-615666)       <target type='serial' port='0'/>
	I0729 13:57:53.215132  307651 main.go:141] libmachine: (newest-cni-615666)     </console>
	I0729 13:57:53.215143  307651 main.go:141] libmachine: (newest-cni-615666)     <rng model='virtio'>
	I0729 13:57:53.215157  307651 main.go:141] libmachine: (newest-cni-615666)       <backend model='random'>/dev/random</backend>
	I0729 13:57:53.215167  307651 main.go:141] libmachine: (newest-cni-615666)     </rng>
	I0729 13:57:53.215177  307651 main.go:141] libmachine: (newest-cni-615666)     
	I0729 13:57:53.215187  307651 main.go:141] libmachine: (newest-cni-615666)     
	I0729 13:57:53.215213  307651 main.go:141] libmachine: (newest-cni-615666)   </devices>
	I0729 13:57:53.215234  307651 main.go:141] libmachine: (newest-cni-615666) </domain>
	I0729 13:57:53.215247  307651 main.go:141] libmachine: (newest-cni-615666) 
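Defining and booting a guest from XML like the one above can be reproduced with virsh; this is only an illustrative sketch of what the driver does through the libvirt bindings, assuming the <domain> document is saved locally:

    # hypothetical path holding the <domain> XML printed above
    virsh define /tmp/newest-cni-615666-domain.xml   # register the domain
    virsh start newest-cni-615666                    # boot it; first boot runs from the attached boot2docker.iso
    virsh domiflist newest-cni-615666                # shows both NICs: network "default" and "mk-newest-cni-615666"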
	I0729 13:57:53.219196  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:3b:27:22 in network default
	I0729 13:57:53.219785  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:53.219803  307651 main.go:141] libmachine: (newest-cni-615666) Ensuring networks are active...
	I0729 13:57:53.220493  307651 main.go:141] libmachine: (newest-cni-615666) Ensuring network default is active
	I0729 13:57:53.220907  307651 main.go:141] libmachine: (newest-cni-615666) Ensuring network mk-newest-cni-615666 is active
	I0729 13:57:53.221375  307651 main.go:141] libmachine: (newest-cni-615666) Getting domain xml...
	I0729 13:57:53.222054  307651 main.go:141] libmachine: (newest-cni-615666) Creating domain...
	I0729 13:57:54.486907  307651 main.go:141] libmachine: (newest-cni-615666) Waiting to get IP...
	I0729 13:57:54.487726  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:54.488219  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:54.488298  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:54.488206  307673 retry.go:31] will retry after 224.825413ms: waiting for machine to come up
	I0729 13:57:54.714736  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:54.715293  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:54.715319  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:54.715232  307673 retry.go:31] will retry after 250.510372ms: waiting for machine to come up
	I0729 13:57:54.967589  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:54.968048  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:54.968100  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:54.968004  307673 retry.go:31] will retry after 335.831428ms: waiting for machine to come up
	I0729 13:57:55.305558  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:55.306139  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:55.306172  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:55.306080  307673 retry.go:31] will retry after 446.030984ms: waiting for machine to come up
	I0729 13:57:55.753442  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:55.753936  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:55.753968  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:55.753855  307673 retry.go:31] will retry after 615.423851ms: waiting for machine to come up
	I0729 13:57:56.370679  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:56.371089  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:56.371115  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:56.371031  307673 retry.go:31] will retry after 723.248359ms: waiting for machine to come up
	I0729 13:57:57.095633  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:57.096133  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:57.096162  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:57.096088  307673 retry.go:31] will retry after 1.100417203s: waiting for machine to come up
	I0729 13:57:58.198663  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:58.199273  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:58.199301  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:58.199217  307673 retry.go:31] will retry after 989.794868ms: waiting for machine to come up
	I0729 13:57:59.412987  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:57:59.413478  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:57:59.413589  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:57:59.413446  307673 retry.go:31] will retry after 1.184173913s: waiting for machine to come up
	I0729 13:58:00.599783  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:00.600230  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:00.600285  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:00.600199  307673 retry.go:31] will retry after 1.753890387s: waiting for machine to come up
	I0729 13:58:02.355802  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:02.356347  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:02.356379  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:02.356293  307673 retry.go:31] will retry after 2.306064428s: waiting for machine to come up
	I0729 13:58:04.665745  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:04.666162  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:04.666196  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:04.666101  307673 retry.go:31] will retry after 2.495938556s: waiting for machine to come up
	I0729 13:58:07.163199  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:07.163640  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:07.163666  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:07.163585  307673 retry.go:31] will retry after 3.981701427s: waiting for machine to come up
	I0729 13:58:11.147087  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:11.147567  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:11.147591  307651 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:11.147536  307673 retry.go:31] will retry after 4.762123994s: waiting for machine to come up
	I0729 13:58:15.913331  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:15.913873  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has current primary IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:15.913891  307651 main.go:141] libmachine: (newest-cni-615666) Found IP for machine: 192.168.39.244
	I0729 13:58:15.913903  307651 main.go:141] libmachine: (newest-cni-615666) Reserving static IP address...
	I0729 13:58:15.914347  307651 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find host DHCP lease matching {name: "newest-cni-615666", mac: "52:54:00:1a:dc:f2", ip: "192.168.39.244"} in network mk-newest-cni-615666
	I0729 13:58:15.993361  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Getting to WaitForSSH function...
	I0729 13:58:15.993397  307651 main.go:141] libmachine: (newest-cni-615666) Reserved static IP address: 192.168.39.244
	I0729 13:58:15.993411  307651 main.go:141] libmachine: (newest-cni-615666) Waiting for SSH to be available...
	I0729 13:58:15.996041  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:15.996400  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:15.996430  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:15.996572  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Using SSH client type: external
	I0729 13:58:15.996617  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa (-rw-------)
	I0729 13:58:15.996656  307651 main.go:141] libmachine: (newest-cni-615666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:58:15.996670  307651 main.go:141] libmachine: (newest-cni-615666) DBG | About to run SSH command:
	I0729 13:58:15.996693  307651 main.go:141] libmachine: (newest-cni-615666) DBG | exit 0
	I0729 13:58:16.120966  307651 main.go:141] libmachine: (newest-cni-615666) DBG | SSH cmd err, output: <nil>: 
	I0729 13:58:16.121293  307651 main.go:141] libmachine: (newest-cni-615666) KVM machine creation complete!
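The IP the retry loop above was waiting for is simply the DHCP lease handed out on the private network for the guest's MAC address. A manual check, plus the same SSH liveness probe the driver runs (options copied from the logged ssh invocation), might look like:

    virsh net-dhcp-leases mk-newest-cni-615666       # should list 52:54:00:1a:dc:f2 -> 192.168.39.244
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa \
        docker@192.168.39.244 'exit 0' && echo "SSH is up"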
	I0729 13:58:16.121608  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetConfigRaw
	I0729 13:58:16.122254  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:16.122501  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:16.122672  307651 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:58:16.122687  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetState
	I0729 13:58:16.124215  307651 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:58:16.124236  307651 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:58:16.124245  307651 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:58:16.124258  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:16.126737  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.127121  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:16.127148  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.127300  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:16.127480  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.127641  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.127869  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:16.128069  307651 main.go:141] libmachine: Using SSH client type: native
	I0729 13:58:16.128257  307651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:58:16.128268  307651 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:58:16.228148  307651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:58:16.228182  307651 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:58:16.228194  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:16.230785  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.231191  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:16.231222  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.231378  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:16.231602  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.231779  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.231925  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:16.232116  307651 main.go:141] libmachine: Using SSH client type: native
	I0729 13:58:16.232298  307651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:58:16.232310  307651 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:58:16.333750  307651 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:58:16.333848  307651 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:58:16.333862  307651 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:58:16.333871  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetMachineName
	I0729 13:58:16.334128  307651 buildroot.go:166] provisioning hostname "newest-cni-615666"
	I0729 13:58:16.334159  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetMachineName
	I0729 13:58:16.334357  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:16.336858  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.337205  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:16.337234  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.337381  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:16.337554  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.337723  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.337877  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:16.338020  307651 main.go:141] libmachine: Using SSH client type: native
	I0729 13:58:16.338236  307651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:58:16.338252  307651 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-615666 && echo "newest-cni-615666" | sudo tee /etc/hostname
	I0729 13:58:16.451658  307651 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-615666
	
	I0729 13:58:16.451699  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:16.454723  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.455112  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:16.455156  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.455384  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:16.455566  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.455738  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.455948  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:16.456081  307651 main.go:141] libmachine: Using SSH client type: native
	I0729 13:58:16.456240  307651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:58:16.456261  307651 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-615666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-615666/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-615666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:58:16.566635  307651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:58:16.566683  307651 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:58:16.566702  307651 buildroot.go:174] setting up certificates
	I0729 13:58:16.566712  307651 provision.go:84] configureAuth start
	I0729 13:58:16.566720  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetMachineName
	I0729 13:58:16.567073  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:58:16.569897  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.570282  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:16.570314  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.570425  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:16.572590  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.572944  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:16.572977  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.573051  307651 provision.go:143] copyHostCerts
	I0729 13:58:16.573141  307651 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:58:16.573153  307651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:58:16.573216  307651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:58:16.573316  307651 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:58:16.573324  307651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:58:16.573349  307651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:58:16.573423  307651 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:58:16.573430  307651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:58:16.573451  307651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:58:16.573509  307651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.newest-cni-615666 san=[127.0.0.1 192.168.39.244 localhost minikube newest-cni-615666]
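The server certificate generated here can be checked against the SANs listed in the log with openssl, for example:

    openssl x509 -in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'
    # expected to include 127.0.0.1, 192.168.39.244, localhost, minikube, newest-cni-615666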
	I0729 13:58:16.851655  307651 provision.go:177] copyRemoteCerts
	I0729 13:58:16.851730  307651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:58:16.851772  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:16.854310  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.854659  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:16.854694  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:16.854872  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:16.855086  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:16.855276  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:16.855430  307651 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:58:16.936640  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:58:16.961024  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:58:16.983777  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:58:17.006687  307651 provision.go:87] duration metric: took 439.959447ms to configureAuth
	I0729 13:58:17.006723  307651 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:58:17.006899  307651 config.go:182] Loaded profile config "newest-cni-615666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:58:17.006973  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:17.009414  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.009713  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.009746  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.009886  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:17.010099  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:17.010268  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:17.010412  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:17.010584  307651 main.go:141] libmachine: Using SSH client type: native
	I0729 13:58:17.010800  307651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:58:17.010817  307651 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:58:17.285560  307651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:58:17.285600  307651 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:58:17.285611  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetURL
	I0729 13:58:17.287043  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Using libvirt version 6000000
	I0729 13:58:17.288943  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.289289  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.289317  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.289493  307651 main.go:141] libmachine: Docker is up and running!
	I0729 13:58:17.289508  307651 main.go:141] libmachine: Reticulating splines...
	I0729 13:58:17.289518  307651 client.go:171] duration metric: took 24.54526213s to LocalClient.Create
	I0729 13:58:17.289549  307651 start.go:167] duration metric: took 24.545332279s to libmachine.API.Create "newest-cni-615666"
	I0729 13:58:17.289562  307651 start.go:293] postStartSetup for "newest-cni-615666" (driver="kvm2")
	I0729 13:58:17.289579  307651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:58:17.289602  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:17.289813  307651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:58:17.289835  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:17.291938  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.292289  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.292316  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.292460  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:17.292635  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:17.292778  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:17.292934  307651 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:58:17.375642  307651 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:58:17.379791  307651 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:58:17.379828  307651 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:58:17.379914  307651 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:58:17.380026  307651 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:58:17.380163  307651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:58:17.391638  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:58:17.416032  307651 start.go:296] duration metric: took 126.455577ms for postStartSetup
	I0729 13:58:17.416088  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetConfigRaw
	I0729 13:58:17.416779  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:58:17.419273  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.419578  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.419611  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.419828  307651 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/config.json ...
	I0729 13:58:17.420068  307651 start.go:128] duration metric: took 24.694879441s to createHost
	I0729 13:58:17.420101  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:17.422312  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.422682  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.422714  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.422913  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:17.423114  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:17.423271  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:17.423405  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:17.423558  307651 main.go:141] libmachine: Using SSH client type: native
	I0729 13:58:17.423781  307651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:58:17.423797  307651 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:58:17.525681  307651 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722261497.492227069
	
	I0729 13:58:17.525702  307651 fix.go:216] guest clock: 1722261497.492227069
	I0729 13:58:17.525709  307651 fix.go:229] Guest: 2024-07-29 13:58:17.492227069 +0000 UTC Remote: 2024-07-29 13:58:17.420086051 +0000 UTC m=+24.804127721 (delta=72.141018ms)
	I0729 13:58:17.525730  307651 fix.go:200] guest clock delta is within tolerance: 72.141018ms
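The %!s(MISSING) / %!N(MISSING) in the logged command is a formatting artifact of the Go logger (a literal % verb in the shell command with no matching format argument), not part of what was executed; the probe actually sent is most likely:

    date +%s.%N    # guest-side timestamp, 1722261497.492227069 above, compared against the host clock

The 72ms guest/host delta is within tolerance, so no clock adjustment is needed.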
	I0729 13:58:17.525734  307651 start.go:83] releasing machines lock for "newest-cni-615666", held for 24.800641684s
	I0729 13:58:17.525753  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:17.526032  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:58:17.528332  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.528642  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.528671  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.528780  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:17.529295  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:17.529496  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:17.529586  307651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:58:17.529644  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:17.529753  307651 ssh_runner.go:195] Run: cat /version.json
	I0729 13:58:17.529778  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:17.532379  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.532429  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.532778  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.532839  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.532870  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:17.532886  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:17.532978  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:17.533141  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:17.533226  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:17.533406  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:17.533429  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:17.533608  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:17.533631  307651 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:58:17.533774  307651 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:58:17.630370  307651 ssh_runner.go:195] Run: systemctl --version
	I0729 13:58:17.636461  307651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:58:17.797625  307651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:58:17.804356  307651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:58:17.804448  307651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:58:17.821358  307651 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:58:17.821389  307651 start.go:495] detecting cgroup driver to use...
	I0729 13:58:17.821482  307651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:58:17.841517  307651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:58:17.858673  307651 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:58:17.858763  307651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:58:17.874861  307651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:58:17.890652  307651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:58:18.005971  307651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:58:18.152976  307651 docker.go:233] disabling docker service ...
	I0729 13:58:18.153066  307651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:58:18.168746  307651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:58:18.181512  307651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:58:18.329534  307651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:58:18.459326  307651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:58:18.473050  307651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:58:18.493053  307651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:58:18.493127  307651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:58:18.503868  307651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:58:18.503935  307651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:58:18.514592  307651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:58:18.524771  307651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:58:18.536514  307651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:58:18.547290  307651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:58:18.557949  307651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:58:18.574591  307651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:58:18.584831  307651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:58:18.593888  307651 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:58:18.593947  307651 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:58:18.606955  307651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:58:18.615968  307651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:58:18.739300  307651 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:58:18.872084  307651 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:58:18.872191  307651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:58:18.877406  307651 start.go:563] Will wait 60s for crictl version
	I0729 13:58:18.877480  307651 ssh_runner.go:195] Run: which crictl
	I0729 13:58:18.881427  307651 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:58:18.921579  307651 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:58:18.921678  307651 ssh_runner.go:195] Run: crio --version
	I0729 13:58:18.950423  307651 ssh_runner.go:195] Run: crio --version
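
The CRI-O setup steps logged above (crictl endpoint, pause image, cgroup driver, br_netfilter, ip_forward, restart) can be replayed by hand on the guest; a minimal bash sketch assembled only from the commands shown in this log:

# Minimal sketch: replay the CRI-O configuration the driver performed above (run inside the guest VM).
printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
sudo modprobe br_netfilter && sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo systemctl daemon-reload && sudo systemctl restart crio
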
	I0729 13:58:18.981959  307651 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 13:58:18.983311  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:58:18.986302  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:18.986668  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:18.986692  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:18.986918  307651 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:58:18.991072  307651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:58:19.004735  307651 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 13:58:19.006003  307651 kubeadm.go:883] updating cluster {Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:58:19.006145  307651 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:58:19.006216  307651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:58:19.039847  307651 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 13:58:19.039918  307651 ssh_runner.go:195] Run: which lz4
	I0729 13:58:19.044090  307651 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:58:19.048285  307651 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:58:19.048328  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 13:58:20.428190  307651 crio.go:462] duration metric: took 1.384141592s to copy over tarball
	I0729 13:58:20.428291  307651 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:58:22.395183  307651 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.966852144s)
	I0729 13:58:22.395218  307651 crio.go:469] duration metric: took 1.966989288s to extract the tarball
	I0729 13:58:22.395229  307651 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:58:22.433431  307651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:58:22.478607  307651 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:58:22.478638  307651 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:58:22.478649  307651 kubeadm.go:934] updating node { 192.168.39.244 8443 v1.31.0-beta.0 crio true true} ...
	I0729 13:58:22.478804  307651 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-615666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:58:22.478880  307651 ssh_runner.go:195] Run: crio config
	I0729 13:58:22.526330  307651 cni.go:84] Creating CNI manager for ""
	I0729 13:58:22.526351  307651 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:58:22.526361  307651 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 13:58:22.526385  307651 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-615666 NodeName:newest-cni-615666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:58:22.526547  307651 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-615666"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:58:22.526620  307651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 13:58:22.536462  307651 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:58:22.536535  307651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:58:22.546415  307651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 13:58:22.564976  307651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 13:58:22.582206  307651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0729 13:58:22.602056  307651 ssh_runner.go:195] Run: grep 192.168.39.244	control-plane.minikube.internal$ /etc/hosts
	I0729 13:58:22.606229  307651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:58:22.618918  307651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:58:22.743949  307651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:58:22.762145  307651 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666 for IP: 192.168.39.244
	I0729 13:58:22.762175  307651 certs.go:194] generating shared ca certs ...
	I0729 13:58:22.762198  307651 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:22.762406  307651 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:58:22.762458  307651 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:58:22.762472  307651 certs.go:256] generating profile certs ...
	I0729 13:58:22.762569  307651 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/client.key
	I0729 13:58:22.762590  307651 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/client.crt with IP's: []
	I0729 13:58:23.259670  307651 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/client.crt ...
	I0729 13:58:23.259704  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/client.crt: {Name:mk06f82ccc2e0e8e6e9bc525c9b276d6a8ddd5f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:23.259888  307651 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/client.key ...
	I0729 13:58:23.259900  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/client.key: {Name:mkdf6c7b6827baab3fb5f157f4f9541c2c1444a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:23.259973  307651 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.key.4058dbf4
	I0729 13:58:23.259988  307651 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.crt.4058dbf4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244]
	I0729 13:58:23.357579  307651 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.crt.4058dbf4 ...
	I0729 13:58:23.357604  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.crt.4058dbf4: {Name:mk8632f144bb22c2aa5cde4d6dcf93ec9e80852c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:23.357759  307651 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.key.4058dbf4 ...
	I0729 13:58:23.357772  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.key.4058dbf4: {Name:mk23855371afa84d6cddcd74889a626671295164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:23.357840  307651 certs.go:381] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.crt.4058dbf4 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.crt
	I0729 13:58:23.357931  307651 certs.go:385] copying /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.key.4058dbf4 -> /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.key
	I0729 13:58:23.357990  307651 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.key
	I0729 13:58:23.358005  307651 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.crt with IP's: []
	I0729 13:58:23.463101  307651 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.crt ...
	I0729 13:58:23.463131  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.crt: {Name:mkb6e0a4730255f9f565274dc3a82aeab051050d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:23.463281  307651 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.key ...
	I0729 13:58:23.463293  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.key: {Name:mk4de77b80e7bf293a2e784938b2ba97d299d207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:23.463463  307651 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:58:23.463500  307651 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:58:23.463509  307651 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:58:23.463533  307651 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:58:23.463555  307651 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:58:23.463577  307651 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:58:23.463618  307651 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:58:23.464898  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:58:23.493900  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:58:23.525102  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:58:23.573608  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:58:23.599457  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:58:23.622339  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:58:23.644645  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:58:23.666861  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:58:23.689784  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:58:23.712476  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:58:23.734584  307651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:58:23.757836  307651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:58:23.774410  307651 ssh_runner.go:195] Run: openssl version
	I0729 13:58:23.780176  307651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:58:23.790550  307651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:58:23.795029  307651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:58:23.795097  307651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:58:23.800807  307651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:58:23.811132  307651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:58:23.821338  307651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:58:23.825730  307651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:58:23.825787  307651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:58:23.831186  307651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:58:23.841438  307651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:58:23.851541  307651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:58:23.856313  307651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:58:23.856370  307651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:58:23.862269  307651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
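
The openssl x509 -hash calls above produce the subject-hash filenames (b5213941.0, 51391683.0, 3ec20f2e.0) used for the /etc/ssl/certs symlinks; the general pattern, sketched with one of the certificates listed above:

# Sketch: link a CA certificate under its OpenSSL subject hash, as done above for
# minikubeCA.pem, 240340.pem and 2403402.pem.
cert=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$cert")
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
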
	I0729 13:58:23.873063  307651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:58:23.877143  307651 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:58:23.877205  307651 kubeadm.go:392] StartCluster: {Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:58:23.877324  307651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:58:23.877374  307651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:58:23.919977  307651 cri.go:89] found id: ""
	I0729 13:58:23.920068  307651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:58:23.929976  307651 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:58:23.939427  307651 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:58:23.950169  307651 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:58:23.950189  307651 kubeadm.go:157] found existing configuration files:
	
	I0729 13:58:23.950236  307651 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:58:23.960666  307651 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:58:23.960739  307651 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:58:23.970617  307651 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:58:23.980451  307651 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:58:23.980519  307651 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:58:23.997412  307651 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:58:24.007247  307651 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:58:24.007316  307651 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:58:24.016701  307651 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:58:24.027021  307651 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:58:24.027076  307651 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:58:24.038445  307651 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:58:24.151405  307651 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 13:58:24.151523  307651 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:58:24.267298  307651 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:58:24.267507  307651 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:58:24.267651  307651 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 13:58:24.276402  307651 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:58:24.414392  307651 out.go:204]   - Generating certificates and keys ...
	I0729 13:58:24.414522  307651 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:58:24.414668  307651 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:58:24.558826  307651 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 13:58:24.755391  307651 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 13:58:25.036293  307651 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 13:58:25.555350  307651 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 13:58:25.665736  307651 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 13:58:25.666054  307651 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-615666] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0729 13:58:25.769945  307651 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 13:58:25.770200  307651 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-615666] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0729 13:58:25.883552  307651 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 13:58:26.135528  307651 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 13:58:26.525460  307651 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 13:58:26.525711  307651 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:58:26.627228  307651 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:58:26.839537  307651 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:58:26.982682  307651 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:58:27.108801  307651 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:58:27.309835  307651 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:58:27.310399  307651 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:58:27.313725  307651 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:58:27.362597  307651 out.go:204]   - Booting up control plane ...
	I0729 13:58:27.362740  307651 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:58:27.362831  307651 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:58:27.362908  307651 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:58:27.363048  307651 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:58:27.363180  307651 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:58:27.363256  307651 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:58:27.471943  307651 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:58:27.472094  307651 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:58:28.472704  307651 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001413531s
	I0729 13:58:28.472871  307651 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:58:33.477259  307651 kubeadm.go:310] [api-check] The API server is healthy after 5.007336591s
	I0729 13:58:33.493522  307651 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:58:33.520688  307651 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:58:33.551184  307651 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:58:33.551483  307651 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-615666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:58:33.563596  307651 kubeadm.go:310] [bootstrap-token] Using token: pp8q7w.kchk5n6a8481u9hf
	I0729 13:58:33.564983  307651 out.go:204]   - Configuring RBAC rules ...
	I0729 13:58:33.565113  307651 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:58:33.575507  307651 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:58:33.605761  307651 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:58:33.614435  307651 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:58:33.618812  307651 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:58:33.622839  307651 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:58:33.885961  307651 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:58:34.324972  307651 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:58:34.884042  307651 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:58:34.884064  307651 kubeadm.go:310] 
	I0729 13:58:34.884158  307651 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:58:34.884182  307651 kubeadm.go:310] 
	I0729 13:58:34.884299  307651 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:58:34.884327  307651 kubeadm.go:310] 
	I0729 13:58:34.884376  307651 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:58:34.884461  307651 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:58:34.884545  307651 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:58:34.884559  307651 kubeadm.go:310] 
	I0729 13:58:34.884630  307651 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:58:34.884640  307651 kubeadm.go:310] 
	I0729 13:58:34.884697  307651 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:58:34.884706  307651 kubeadm.go:310] 
	I0729 13:58:34.884748  307651 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:58:34.884882  307651 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:58:34.884962  307651 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:58:34.884974  307651 kubeadm.go:310] 
	I0729 13:58:34.885041  307651 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:58:34.885106  307651 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:58:34.885124  307651 kubeadm.go:310] 
	I0729 13:58:34.885244  307651 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pp8q7w.kchk5n6a8481u9hf \
	I0729 13:58:34.885384  307651 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 13:58:34.885415  307651 kubeadm.go:310] 	--control-plane 
	I0729 13:58:34.885424  307651 kubeadm.go:310] 
	I0729 13:58:34.885544  307651 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:58:34.885556  307651 kubeadm.go:310] 
	I0729 13:58:34.885660  307651 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pp8q7w.kchk5n6a8481u9hf \
	I0729 13:58:34.885791  307651 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 13:58:34.886948  307651 kubeadm.go:310] W0729 13:58:24.117575     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 13:58:34.887298  307651 kubeadm.go:310] W0729 13:58:24.118452     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 13:58:34.887404  307651 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
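
Both kubeadm warnings above note that the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API; migrating it follows the command kubeadm itself suggests. A sketch using the config path and binary directory from earlier in this log (the output filename is an assumption):

# Sketch: rewrite the deprecated v1beta3 kubeadm config with the current API version.
sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config migrate \
  --old-config /var/tmp/minikube/kubeadm.yaml \
  --new-config /var/tmp/minikube/kubeadm.migrated.yaml
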
	I0729 13:58:34.887419  307651 cni.go:84] Creating CNI manager for ""
	I0729 13:58:34.887426  307651 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:58:34.889201  307651 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:58:34.890325  307651 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:58:34.901136  307651 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
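
The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in this log; for orientation only, a generic bridge-plugin conflist of roughly that shape (every field here is an assumption, not the file minikube actually wrote) looks like:

# Illustrative only: a generic CNI bridge conflist; values are assumed, not taken from minikube.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
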
	I0729 13:58:34.919965  307651 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:58:34.920054  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:34.920067  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-615666 minikube.k8s.io/updated_at=2024_07_29T13_58_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=newest-cni-615666 minikube.k8s.io/primary=true
	I0729 13:58:35.138097  307651 ops.go:34] apiserver oom_adj: -16
	I0729 13:58:35.138238  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:35.638292  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:36.138957  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:36.639212  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:37.138414  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:37.638750  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:38.138853  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:38.638538  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:39.139170  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:39.638367  307651 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:58:39.895864  307651 kubeadm.go:1113] duration metric: took 4.975879616s to wait for elevateKubeSystemPrivileges
	I0729 13:58:39.895895  307651 kubeadm.go:394] duration metric: took 16.018695509s to StartCluster
	I0729 13:58:39.895915  307651 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:39.895995  307651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:58:39.897660  307651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:58:39.897911  307651 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 13:58:39.897948  307651 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:58:39.898034  307651 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:58:39.898121  307651 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-615666"
	I0729 13:58:39.898164  307651 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-615666"
	I0729 13:58:39.898164  307651 addons.go:69] Setting default-storageclass=true in profile "newest-cni-615666"
	I0729 13:58:39.898205  307651 host.go:66] Checking if "newest-cni-615666" exists ...
	I0729 13:58:39.898215  307651 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-615666"
	I0729 13:58:39.898171  307651 config.go:182] Loaded profile config "newest-cni-615666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:58:39.898602  307651 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:39.898617  307651 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:39.898654  307651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:39.898763  307651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:39.899536  307651 out.go:177] * Verifying Kubernetes components...
	I0729 13:58:39.900949  307651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:58:39.914645  307651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36891
	I0729 13:58:39.914917  307651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0729 13:58:39.915197  307651 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:39.915402  307651 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:39.915745  307651 main.go:141] libmachine: Using API Version  1
	I0729 13:58:39.915767  307651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:39.916153  307651 main.go:141] libmachine: Using API Version  1
	I0729 13:58:39.916171  307651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:39.916228  307651 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:39.916754  307651 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:39.916813  307651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:39.917142  307651 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:39.917382  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetState
	I0729 13:58:39.923378  307651 addons.go:234] Setting addon default-storageclass=true in "newest-cni-615666"
	I0729 13:58:39.923424  307651 host.go:66] Checking if "newest-cni-615666" exists ...
	I0729 13:58:39.923786  307651 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:39.923830  307651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:39.933509  307651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0729 13:58:39.933931  307651 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:39.934349  307651 main.go:141] libmachine: Using API Version  1
	I0729 13:58:39.934375  307651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:39.934668  307651 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:39.934860  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetState
	I0729 13:58:39.936575  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:39.938760  307651 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:58:39.939334  307651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0729 13:58:39.939778  307651 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:39.940196  307651 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:58:39.940216  307651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:58:39.940235  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:39.940314  307651 main.go:141] libmachine: Using API Version  1
	I0729 13:58:39.940331  307651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:39.940676  307651 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:39.941250  307651 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:39.941286  307651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:39.943606  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:39.944109  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:39.944131  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:39.944318  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:39.944470  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:39.944657  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:39.944789  307651 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:58:39.956156  307651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42023
	I0729 13:58:39.956587  307651 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:39.957048  307651 main.go:141] libmachine: Using API Version  1
	I0729 13:58:39.957066  307651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:39.957363  307651 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:39.957519  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetState
	I0729 13:58:39.959074  307651 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:39.959283  307651 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:58:39.959298  307651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:58:39.959312  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:58:39.961984  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:39.962351  307651 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:58:39.962381  307651 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:39.962540  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:58:39.962698  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:58:39.962828  307651 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:58:39.962944  307651 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:58:40.277020  307651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:58:40.277069  307651 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 13:58:40.351412  307651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:58:40.418200  307651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:58:41.118397  307651 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 13:58:41.120140  307651 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:58:41.120203  307651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:58:41.425717  307651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.007474638s)
	I0729 13:58:41.425782  307651 main.go:141] libmachine: Making call to close driver server
	I0729 13:58:41.425797  307651 main.go:141] libmachine: (newest-cni-615666) Calling .Close
	I0729 13:58:41.425804  307651 api_server.go:72] duration metric: took 1.527810866s to wait for apiserver process to appear ...
	I0729 13:58:41.425831  307651 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:58:41.425854  307651 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0729 13:58:41.425852  307651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.074403796s)
	I0729 13:58:41.425887  307651 main.go:141] libmachine: Making call to close driver server
	I0729 13:58:41.425898  307651 main.go:141] libmachine: (newest-cni-615666) Calling .Close
	I0729 13:58:41.426157  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Closing plugin on server side
	I0729 13:58:41.426187  307651 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:58:41.426200  307651 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:58:41.426201  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Closing plugin on server side
	I0729 13:58:41.426209  307651 main.go:141] libmachine: Making call to close driver server
	I0729 13:58:41.426217  307651 main.go:141] libmachine: (newest-cni-615666) Calling .Close
	I0729 13:58:41.426220  307651 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:58:41.426240  307651 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:58:41.426249  307651 main.go:141] libmachine: Making call to close driver server
	I0729 13:58:41.426255  307651 main.go:141] libmachine: (newest-cni-615666) Calling .Close
	I0729 13:58:41.426433  307651 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:58:41.426448  307651 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:58:41.427181  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Closing plugin on server side
	I0729 13:58:41.427265  307651 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:58:41.427288  307651 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:58:41.442235  307651 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I0729 13:58:41.447412  307651 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:58:41.447442  307651 api_server.go:131] duration metric: took 21.603022ms to wait for apiserver health ...
	I0729 13:58:41.447453  307651 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:58:41.451300  307651 main.go:141] libmachine: Making call to close driver server
	I0729 13:58:41.451334  307651 main.go:141] libmachine: (newest-cni-615666) Calling .Close
	I0729 13:58:41.451620  307651 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:58:41.451646  307651 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:58:41.451649  307651 main.go:141] libmachine: (newest-cni-615666) DBG | Closing plugin on server side
	I0729 13:58:41.453381  307651 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 13:58:41.454836  307651 addons.go:510] duration metric: took 1.556800513s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 13:58:41.471673  307651 system_pods.go:59] 8 kube-system pods found
	I0729 13:58:41.471782  307651 system_pods.go:61] "coredns-5cfdc65f69-qwdtc" [5d1cbd49-eac3-4299-ab3f-3de8a82f4cc2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:58:41.471802  307651 system_pods.go:61] "coredns-5cfdc65f69-wd952" [0878215d-4d2b-432a-9f33-4769a39eb2a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:58:41.471819  307651 system_pods.go:61] "etcd-newest-cni-615666" [e0f1c19a-357a-4e24-9c97-ae8645dd0d32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:58:41.471832  307651 system_pods.go:61] "kube-apiserver-newest-cni-615666" [701b6840-ba78-46cd-9145-8d240e1adde1] Running
	I0729 13:58:41.471842  307651 system_pods.go:61] "kube-controller-manager-newest-cni-615666" [2aa25ca8-de55-4c02-9dd9-2f8438558564] Running
	I0729 13:58:41.471852  307651 system_pods.go:61] "kube-proxy-bk2pb" [183fbef1-da47-44b9-9b7c-196b26903ff2] Running
	I0729 13:58:41.471871  307651 system_pods.go:61] "kube-scheduler-newest-cni-615666" [6d8c192c-9755-42fb-beb4-3a3b9b6755e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:58:41.471881  307651 system_pods.go:61] "storage-provisioner" [acf4eb36-8b1d-49b0-8740-38e781353805] Pending
	I0729 13:58:41.471890  307651 system_pods.go:74] duration metric: took 24.430181ms to wait for pod list to return data ...
	I0729 13:58:41.471900  307651 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:58:41.482680  307651 default_sa.go:45] found service account: "default"
	I0729 13:58:41.482705  307651 default_sa.go:55] duration metric: took 10.79722ms for default service account to be created ...
	I0729 13:58:41.482719  307651 kubeadm.go:582] duration metric: took 1.584731007s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 13:58:41.482737  307651 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:58:41.491403  307651 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:58:41.491438  307651 node_conditions.go:123] node cpu capacity is 2
	I0729 13:58:41.491454  307651 node_conditions.go:105] duration metric: took 8.7099ms to run NodePressure ...
	I0729 13:58:41.491469  307651 start.go:241] waiting for startup goroutines ...
	I0729 13:58:41.624417  307651 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-615666" context rescaled to 1 replicas
	I0729 13:58:41.624462  307651 start.go:246] waiting for cluster config update ...
	I0729 13:58:41.624477  307651 start.go:255] writing updated cluster config ...
	I0729 13:58:41.624786  307651 ssh_runner.go:195] Run: rm -f paused
	I0729 13:58:41.682344  307651 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:58:41.684299  307651 out.go:177] * Done! kubectl is now configured to use "newest-cni-615666" cluster and "default" namespace by default
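	(Editor's note on the trace above: after applying the addon manifests, minikube waits for the apiserver by polling its /healthz endpoint until it returns 200/"ok", as seen in the api_server.go lines. The following is only a minimal, hypothetical Go sketch of that kind of readiness poll, reusing the endpoint URL shown in the trace and skipping TLS verification purely for brevity; it is not minikube's actual implementation.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// The apiserver presents a cluster-local certificate here; verification
				// is skipped only to keep the sketch short. Real code should trust the
				// cluster CA instead of using InsecureSkipVerify.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint taken from the trace above (hypothetical usage).
		if err := waitForHealthz("https://192.168.39.244:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}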
	
	
	==> CRI-O <==
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.179933840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261522179911670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=211dc3a2-319a-4b67-9fde-463ff48fa89d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.180675202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e454be6c-3b85-44cd-91e6-19a4489fca01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.180747676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e454be6c-3b85-44cd-91e6-19a4489fca01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.181057188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e454be6c-3b85-44cd-91e6-19a4489fca01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.224687229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51b6b5f8-8730-44f9-8b05-8135677dc840 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.224779345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51b6b5f8-8730-44f9-8b05-8135677dc840 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.226257112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87e60bf6-0eb1-4af8-b98a-7895a6573c1f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.226662315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261522226640332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87e60bf6-0eb1-4af8-b98a-7895a6573c1f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.227218325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29f0c23f-23a4-47c9-9517-575e48807363 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.227269211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29f0c23f-23a4-47c9-9517-575e48807363 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.227451638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29f0c23f-23a4-47c9-9517-575e48807363 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.269892904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5cb4bdb-db3e-4943-bfc9-5bfdedabb73f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.269967244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5cb4bdb-db3e-4943-bfc9-5bfdedabb73f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.271381288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dfaef67-ac2a-4558-b554-03c1d60f223a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.271859795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261522271831254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dfaef67-ac2a-4558-b554-03c1d60f223a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.272516873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d542cf50-c7ac-4cbf-9eb5-711411aa5e9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.272587850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d542cf50-c7ac-4cbf-9eb5-711411aa5e9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.272791692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d542cf50-c7ac-4cbf-9eb5-711411aa5e9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.307422601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee1df3b6-8f61-4e4d-9f28-a95eb167394d name=/runtime.v1.RuntimeService/Version
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.307557402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee1df3b6-8f61-4e4d-9f28-a95eb167394d name=/runtime.v1.RuntimeService/Version
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.308741877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de4e78b0-8be9-42b2-a29d-ba2f486e8049 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.309180623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261522309088475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de4e78b0-8be9-42b2-a29d-ba2f486e8049 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.309762231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=deaf3396-7ef4-4ed1-bfe0-cea056b77a61 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.309813531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=deaf3396-7ef4-4ed1-bfe0-cea056b77a61 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:58:42 embed-certs-135920 crio[733]: time="2024-07-29 13:58:42.310027145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6fa41d01a3bbd092a84b63fdd76e0e4a9cfcb8095cff9783d4dda551a0cd697,PodSandboxId:14fb7737457697938a85cf65bd9088bca53d0c84788afab923b48bcc11202337,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722260339582699946,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9da5631b-2e6f-49af-a4d1-47b2bc69778b,},Annotations:map[string]string{io.kubernetes.container.hash: b9f66443,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1,PodSandboxId:803e367761b0f6026783d3479b77e316b28799e211080c73d25a279f5de77ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260335567786994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rgh5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7276884-67e0-41fc-af75-2f8ba96e4c52,},Annotations:map[string]string{io.kubernetes.container.hash: c4df8208,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260328447934116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1,PodSandboxId:239ae06cab4adc1b1a940b99680def754f4049f3578cad5cd5c91761c926e9a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260327699264840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn8bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199ef7b-b5ff-4051-a
bf7-eda86a891508,},Annotations:map[string]string{io.kubernetes.container.hash: d38fafe2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6,PodSandboxId:1325c9477fc3dbc8d98d86d32d0f9dd0366c7d444e0be1e053bd5356572f8cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260327703364173,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 420625d8-a8f2-4ca4-90b0-7090c079b
40e,},Annotations:map[string]string{io.kubernetes.container.hash: 15b4a64c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee,PodSandboxId:0514a8f6f2fadd61c3da0fe930a0524ba384511d750ad68b61875093716859db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260324104011387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9afbefcfde49d6
4377d69e47d176392f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879,PodSandboxId:51546de0b77e669b3811cdd82ad8ef954886a76dccbdc7c465277ed4b8bec051,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260324105949569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e94c9e43c85bb55d6d45111d97033f81,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a4bd2e15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046,PodSandboxId:741f798f6f138e14978e159f4df096c1682e9eefd26acd95c73f6e45ee08117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260324094702345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95268b823ab24992388d3d2e5120ca4e,},Annotations:map[string]string{io.
kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679,PodSandboxId:c2c437ffdf74016d1129eafa01adade139da217a95d24b78ae3170a5c9c4e0ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260324090830620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-135920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b774d6042fd8fc65469594fd0dce96,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 160782ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=deaf3396-7ef4-4ed1-bfe0-cea056b77a61 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d6fa41d01a3bb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   14fb773745769       busybox
	77e0f82421c5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   803e367761b0f       coredns-7db6d8ff4d-rgh5d
	197f6e7a6144c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   1325c9477fc3d       storage-provisioner
	5b08d92f67be8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   1325c9477fc3d       storage-provisioner
	646e0d1187d7e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      19 minutes ago      Running             kube-proxy                1                   239ae06cab4ad       kube-proxy-sn8bc
	7ed77a408cabd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   51546de0b77e6       etcd-embed-certs-135920
	d0bbe9cda62b6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      19 minutes ago      Running             kube-controller-manager   1                   0514a8f6f2fad       kube-controller-manager-embed-certs-135920
	ed231f7f456e5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      19 minutes ago      Running             kube-scheduler            1                   741f798f6f138       kube-scheduler-embed-certs-135920
	ac9187ea50de2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      19 minutes ago      Running             kube-apiserver            1                   c2c437ffdf740       kube-apiserver-embed-certs-135920
	
	
	==> coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58595 - 46810 "HINFO IN 5845440276659678672.3557346812183137599. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009280019s
	
	
	==> describe nodes <==
	Name:               embed-certs-135920
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-135920
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=embed-certs-135920
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-135920
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:58:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:54:36 +0000   Mon, 29 Jul 2024 13:29:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:54:36 +0000   Mon, 29 Jul 2024 13:29:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:54:36 +0000   Mon, 29 Jul 2024 13:29:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:54:36 +0000   Mon, 29 Jul 2024 13:38:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.207
	  Hostname:    embed-certs-135920
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7562c7425a849ffb5070c9c7a0b2768
	  System UUID:                c7562c74-25a8-49ff-b507-0c9c7a0b2768
	  Boot ID:                    f4437f0d-14d4-4e88-8962-a92f1b148565
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-rgh5d                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-135920                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-135920             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-135920    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-sn8bc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-135920             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-nzn76               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-135920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-135920 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-135920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-135920 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-135920 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-135920 event: Registered Node embed-certs-135920 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-135920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-135920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-135920 event: Registered Node embed-certs-135920 in Controller
	
	
	==> dmesg <==
	[Jul29 13:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052072] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042303] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.164186] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.618090] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.387412] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.314639] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.063680] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058131] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.186646] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.116279] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.312358] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +4.385553] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +0.061727] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.907188] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +4.595559] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.476029] systemd-fstab-generator[1580]: Ignoring "noauto" option for root device
	[  +3.263612] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.297093] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] <==
	{"level":"info","ts":"2024-07-29T13:38:44.936287Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f6357764450262","local-member-id":"e55d95d7437bec44","added-peer-id":"e55d95d7437bec44","added-peer-peer-urls":["https://192.168.72.207:2380"]}
	{"level":"info","ts":"2024-07-29T13:38:44.917328Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.207:2380"}
	{"level":"info","ts":"2024-07-29T13:38:44.939312Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.207:2380"}
	{"level":"info","ts":"2024-07-29T13:38:44.93953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f6357764450262","local-member-id":"e55d95d7437bec44","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:38:44.939633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:38:45.82596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T13:38:45.826012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T13:38:45.826049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 received MsgPreVoteResp from e55d95d7437bec44 at term 2"}
	{"level":"info","ts":"2024-07-29T13:38:45.826062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.826068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 received MsgVoteResp from e55d95d7437bec44 at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.826076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e55d95d7437bec44 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.826086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e55d95d7437bec44 elected leader e55d95d7437bec44 at term 3"}
	{"level":"info","ts":"2024-07-29T13:38:45.828533Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e55d95d7437bec44","local-member-attributes":"{Name:embed-certs-135920 ClientURLs:[https://192.168.72.207:2379]}","request-path":"/0/members/e55d95d7437bec44/attributes","cluster-id":"6f6357764450262","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:38:45.828592Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:38:45.828933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:38:45.828964Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:38:45.829089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:38:45.830938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:38:45.831678Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.207:2379"}
	{"level":"info","ts":"2024-07-29T13:48:45.859562Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":872}
	{"level":"info","ts":"2024-07-29T13:48:45.869775Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":872,"took":"9.35731ms","hash":351501222,"current-db-size-bytes":2592768,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2592768,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-29T13:48:45.869868Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":351501222,"revision":872,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T13:53:45.86784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2024-07-29T13:53:45.872446Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1114,"took":"3.986058ms","hash":1892822859,"current-db-size-bytes":2592768,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T13:53:45.872532Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1892822859,"revision":1114,"compact-revision":872}
	
	
	==> kernel <==
	 13:58:42 up 20 min,  0 users,  load average: 0.41, 0.28, 0.18
	Linux embed-certs-135920 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] <==
	I0729 13:51:48.204604       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:53:47.203834       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:53:47.203934       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 13:53:48.204889       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:53:48.205068       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:53:48.205171       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:53:48.205013       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:53:48.205272       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:53:48.207274       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:54:48.205684       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:54:48.205837       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:54:48.205865       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:54:48.207975       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:54:48.208018       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:54:48.208027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:56:48.206565       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:56:48.206656       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:56:48.206666       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:56:48.208991       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:56:48.209074       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:56:48.209081       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] <==
	I0729 13:53:00.697521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:53:30.178028       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:53:30.708076       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:54:00.183457       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:54:00.715948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:54:30.187744       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:54:30.723571       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:55:00.193265       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:55:00.731809       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:55:16.365684       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="354.027µs"
	E0729 13:55:30.197997       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:55:30.365283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="231.58µs"
	I0729 13:55:30.743832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:56:00.203001       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:56:00.751595       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:56:30.207748       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:56:30.759243       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:57:00.212943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:57:00.767349       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:57:30.218246       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:57:30.776076       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:58:00.224415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:58:00.784027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:58:30.229670       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:58:30.792619       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] <==
	I0729 13:38:47.869659       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:38:47.880213       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.207"]
	I0729 13:38:47.921513       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:38:47.921547       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:38:47.921569       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:38:47.924176       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:38:47.924429       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:38:47.924619       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:38:47.925810       1 config.go:192] "Starting service config controller"
	I0729 13:38:47.925886       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:38:47.925968       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:38:47.925989       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:38:47.926754       1 config.go:319] "Starting node config controller"
	I0729 13:38:47.927772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:38:48.026219       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:38:48.026311       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:38:48.028137       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] <==
	I0729 13:38:45.320257       1 serving.go:380] Generated self-signed cert in-memory
	W0729 13:38:47.145407       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 13:38:47.145497       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:38:47.145508       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 13:38:47.145514       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 13:38:47.200368       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 13:38:47.200462       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:38:47.207810       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 13:38:47.211706       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 13:38:47.211748       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 13:38:47.211768       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 13:38:47.312472       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:55:43 embed-certs-135920 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:55:43 embed-certs-135920 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:55:54 embed-certs-135920 kubelet[944]: E0729 13:55:54.350533     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:56:09 embed-certs-135920 kubelet[944]: E0729 13:56:09.349479     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:56:23 embed-certs-135920 kubelet[944]: E0729 13:56:23.348805     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:56:37 embed-certs-135920 kubelet[944]: E0729 13:56:37.350338     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:56:43 embed-certs-135920 kubelet[944]: E0729 13:56:43.364621     944 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:56:43 embed-certs-135920 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:56:43 embed-certs-135920 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:56:43 embed-certs-135920 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:56:43 embed-certs-135920 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:56:49 embed-certs-135920 kubelet[944]: E0729 13:56:49.350382     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:57:04 embed-certs-135920 kubelet[944]: E0729 13:57:04.348333     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:57:16 embed-certs-135920 kubelet[944]: E0729 13:57:16.348982     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:57:30 embed-certs-135920 kubelet[944]: E0729 13:57:30.349159     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:57:43 embed-certs-135920 kubelet[944]: E0729 13:57:43.350711     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:57:43 embed-certs-135920 kubelet[944]: E0729 13:57:43.366441     944 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:57:43 embed-certs-135920 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:57:43 embed-certs-135920 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:57:43 embed-certs-135920 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:57:43 embed-certs-135920 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:57:58 embed-certs-135920 kubelet[944]: E0729 13:57:58.349009     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:58:13 embed-certs-135920 kubelet[944]: E0729 13:58:13.350022     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:58:24 embed-certs-135920 kubelet[944]: E0729 13:58:24.349686     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	Jul 29 13:58:36 embed-certs-135920 kubelet[944]: E0729 13:58:36.348506     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-nzn76" podUID="4ce279ad-65aa-47ce-9cb2-9a964d26950c"
	
	
	==> storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] <==
	I0729 13:38:48.562587       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:38:48.582550       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:38:48.582824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:39:05.981368       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:39:05.981605       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-135920_95a59b64-6ebe-48ac-9681-e2fb4ef0b1e1!
	I0729 13:39:05.985345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a509313c-4b5c-4823-a2c1-a8b580d2e8ee", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-135920_95a59b64-6ebe-48ac-9681-e2fb4ef0b1e1 became leader
	I0729 13:39:06.082289       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-135920_95a59b64-6ebe-48ac-9681-e2fb4ef0b1e1!
	
	
	==> storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] <==
	I0729 13:38:47.808195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 13:38:47.814844       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-135920 -n embed-certs-135920
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-135920 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-nzn76
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-135920 describe pod metrics-server-569cc877fc-nzn76
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-135920 describe pod metrics-server-569cc877fc-nzn76: exit status 1 (61.766244ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-nzn76" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-135920 describe pod metrics-server-569cc877fc-nzn76: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (415.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 13:59:13.137987917 +0000 UTC m=+7031.401342729
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-972693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.704µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-972693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-972693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-972693 logs -n 25: (1.300663478s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC | 29 Jul 24 13:57 UTC |
	| start   | -p newest-cni-615666 --memory=2200 --alsologtostderr   | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC | 29 Jul 24 13:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:57 UTC | 29 Jul 24 13:57 UTC |
	| addons  | enable metrics-server -p newest-cni-615666             | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:58 UTC | 29 Jul 24 13:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-615666                                   | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:58 UTC | 29 Jul 24 13:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:58 UTC | 29 Jul 24 13:58 UTC |
	| addons  | enable dashboard -p newest-cni-615666                  | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:58 UTC | 29 Jul 24 13:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-615666 --memory=2200 --alsologtostderr   | newest-cni-615666            | jenkins | v1.33.1 | 29 Jul 24 13:58 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:58:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:58:49.969310  308571 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:58:49.969436  308571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:58:49.969445  308571 out.go:304] Setting ErrFile to fd 2...
	I0729 13:58:49.969449  308571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:58:49.969701  308571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:58:49.970262  308571 out.go:298] Setting JSON to false
	I0729 13:58:49.971221  308571 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13273,"bootTime":1722248257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:58:49.971290  308571 start.go:139] virtualization: kvm guest
	I0729 13:58:49.973491  308571 out.go:177] * [newest-cni-615666] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:58:49.974783  308571 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:58:49.974800  308571 notify.go:220] Checking for updates...
	I0729 13:58:49.977296  308571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:58:49.978557  308571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:58:49.979854  308571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:58:49.981034  308571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:58:49.982335  308571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:58:49.984047  308571 config.go:182] Loaded profile config "newest-cni-615666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:58:49.984566  308571 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:49.984631  308571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:49.999528  308571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0729 13:58:49.999945  308571 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:50.000474  308571 main.go:141] libmachine: Using API Version  1
	I0729 13:58:50.000496  308571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:50.000835  308571 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:50.001036  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:50.001275  308571 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:58:50.001548  308571 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:50.001586  308571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:50.018063  308571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0729 13:58:50.018464  308571 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:50.018951  308571 main.go:141] libmachine: Using API Version  1
	I0729 13:58:50.018978  308571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:50.019306  308571 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:50.019492  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:50.054311  308571 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:58:50.055679  308571 start.go:297] selected driver: kvm2
	I0729 13:58:50.055700  308571 start.go:901] validating driver "kvm2" against &{Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:58:50.055859  308571 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:58:50.056955  308571 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:58:50.057055  308571 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:58:50.074064  308571 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:58:50.074486  308571 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 13:58:50.074519  308571 cni.go:84] Creating CNI manager for ""
	I0729 13:58:50.074532  308571 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:58:50.074576  308571 start.go:340] cluster config:
	{Name:newest-cni-615666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-615666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:58:50.074681  308571 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:58:50.076248  308571 out.go:177] * Starting "newest-cni-615666" primary control-plane node in "newest-cni-615666" cluster
	I0729 13:58:50.077470  308571 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:58:50.077513  308571 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:58:50.077523  308571 cache.go:56] Caching tarball of preloaded images
	I0729 13:58:50.077596  308571 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:58:50.077607  308571 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 13:58:50.077712  308571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/config.json ...
	I0729 13:58:50.077889  308571 start.go:360] acquireMachinesLock for newest-cni-615666: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:58:50.077929  308571 start.go:364] duration metric: took 22.755µs to acquireMachinesLock for "newest-cni-615666"
	I0729 13:58:50.077943  308571 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:58:50.077959  308571 fix.go:54] fixHost starting: 
	I0729 13:58:50.078206  308571 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:58:50.078234  308571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:58:50.092769  308571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
	I0729 13:58:50.093190  308571 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:58:50.093668  308571 main.go:141] libmachine: Using API Version  1
	I0729 13:58:50.093690  308571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:58:50.094053  308571 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:58:50.094274  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:58:50.094447  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetState
	I0729 13:58:50.095980  308571 fix.go:112] recreateIfNeeded on newest-cni-615666: state=Stopped err=<nil>
	I0729 13:58:50.096021  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	W0729 13:58:50.096175  308571 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:58:50.098181  308571 out.go:177] * Restarting existing kvm2 VM for "newest-cni-615666" ...
	I0729 13:58:50.099644  308571 main.go:141] libmachine: (newest-cni-615666) Calling .Start
	I0729 13:58:50.099865  308571 main.go:141] libmachine: (newest-cni-615666) Ensuring networks are active...
	I0729 13:58:50.100632  308571 main.go:141] libmachine: (newest-cni-615666) Ensuring network default is active
	I0729 13:58:50.101049  308571 main.go:141] libmachine: (newest-cni-615666) Ensuring network mk-newest-cni-615666 is active
	I0729 13:58:50.101477  308571 main.go:141] libmachine: (newest-cni-615666) Getting domain xml...
	I0729 13:58:50.102309  308571 main.go:141] libmachine: (newest-cni-615666) Creating domain...
	I0729 13:58:51.335576  308571 main.go:141] libmachine: (newest-cni-615666) Waiting to get IP...
	I0729 13:58:51.336512  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:51.336944  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:51.337016  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:51.336928  308606 retry.go:31] will retry after 276.154427ms: waiting for machine to come up
	I0729 13:58:51.614379  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:51.614972  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:51.615000  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:51.614915  308606 retry.go:31] will retry after 252.388015ms: waiting for machine to come up
	I0729 13:58:51.869479  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:51.869922  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:51.869950  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:51.869881  308606 retry.go:31] will retry after 326.593105ms: waiting for machine to come up
	I0729 13:58:52.198333  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:52.198766  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:52.198799  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:52.198711  308606 retry.go:31] will retry after 538.616899ms: waiting for machine to come up
	I0729 13:58:52.739009  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:52.739422  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:52.739452  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:52.739372  308606 retry.go:31] will retry after 684.307154ms: waiting for machine to come up
	I0729 13:58:53.425423  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:53.425951  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:53.425976  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:53.425904  308606 retry.go:31] will retry after 741.58237ms: waiting for machine to come up
	I0729 13:58:54.168866  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:54.169272  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:54.169310  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:54.169245  308606 retry.go:31] will retry after 866.166259ms: waiting for machine to come up
	I0729 13:58:55.037188  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:55.037684  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:55.037706  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:55.037644  308606 retry.go:31] will retry after 1.028726906s: waiting for machine to come up
	I0729 13:58:56.068166  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:56.068723  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:56.068750  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:56.068679  308606 retry.go:31] will retry after 1.490719214s: waiting for machine to come up
	I0729 13:58:57.561282  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:57.561716  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:57.561746  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:57.561661  308606 retry.go:31] will retry after 1.527882241s: waiting for machine to come up
	I0729 13:58:59.090801  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:58:59.091361  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:58:59.091400  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:58:59.091288  308606 retry.go:31] will retry after 2.894549177s: waiting for machine to come up
	I0729 13:59:01.986869  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:01.987318  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:59:01.987348  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:59:01.987255  308606 retry.go:31] will retry after 2.468196736s: waiting for machine to come up
	I0729 13:59:04.457909  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:04.458312  308571 main.go:141] libmachine: (newest-cni-615666) DBG | unable to find current IP address of domain newest-cni-615666 in network mk-newest-cni-615666
	I0729 13:59:04.458353  308571 main.go:141] libmachine: (newest-cni-615666) DBG | I0729 13:59:04.458283  308606 retry.go:31] will retry after 3.463192222s: waiting for machine to come up
	I0729 13:59:07.925329  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:07.925818  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has current primary IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:07.925839  308571 main.go:141] libmachine: (newest-cni-615666) Found IP for machine: 192.168.39.244
	I0729 13:59:07.925848  308571 main.go:141] libmachine: (newest-cni-615666) Reserving static IP address...
	I0729 13:59:07.926304  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "newest-cni-615666", mac: "52:54:00:1a:dc:f2", ip: "192.168.39.244"} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:07.926334  308571 main.go:141] libmachine: (newest-cni-615666) Reserved static IP address: 192.168.39.244
	I0729 13:59:07.926355  308571 main.go:141] libmachine: (newest-cni-615666) DBG | skip adding static IP to network mk-newest-cni-615666 - found existing host DHCP lease matching {name: "newest-cni-615666", mac: "52:54:00:1a:dc:f2", ip: "192.168.39.244"}
	I0729 13:59:07.926374  308571 main.go:141] libmachine: (newest-cni-615666) DBG | Getting to WaitForSSH function...
	I0729 13:59:07.926390  308571 main.go:141] libmachine: (newest-cni-615666) Waiting for SSH to be available...
	I0729 13:59:07.928411  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:07.928700  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:07.928738  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:07.928860  308571 main.go:141] libmachine: (newest-cni-615666) DBG | Using SSH client type: external
	I0729 13:59:07.928893  308571 main.go:141] libmachine: (newest-cni-615666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa (-rw-------)
	I0729 13:59:07.928928  308571 main.go:141] libmachine: (newest-cni-615666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:59:07.928939  308571 main.go:141] libmachine: (newest-cni-615666) DBG | About to run SSH command:
	I0729 13:59:07.928954  308571 main.go:141] libmachine: (newest-cni-615666) DBG | exit 0
	I0729 13:59:08.049254  308571 main.go:141] libmachine: (newest-cni-615666) DBG | SSH cmd err, output: <nil>: 
	I0729 13:59:08.049627  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetConfigRaw
	I0729 13:59:08.050374  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:59:08.053028  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.053380  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.053420  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.053663  308571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/newest-cni-615666/config.json ...
	I0729 13:59:08.053896  308571 machine.go:94] provisionDockerMachine start ...
	I0729 13:59:08.053918  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:59:08.054106  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:08.056501  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.056884  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.056910  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.057081  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:08.057298  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.057476  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.057626  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:08.057779  308571 main.go:141] libmachine: Using SSH client type: native
	I0729 13:59:08.057991  308571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:59:08.058003  308571 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:59:08.161142  308571 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:59:08.161178  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetMachineName
	I0729 13:59:08.161441  308571 buildroot.go:166] provisioning hostname "newest-cni-615666"
	I0729 13:59:08.161477  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetMachineName
	I0729 13:59:08.161695  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:08.164373  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.164710  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.164729  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.164882  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:08.165077  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.165233  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.165403  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:08.165586  308571 main.go:141] libmachine: Using SSH client type: native
	I0729 13:59:08.165767  308571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:59:08.165780  308571 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-615666 && echo "newest-cni-615666" | sudo tee /etc/hostname
	I0729 13:59:08.283246  308571 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-615666
	
	I0729 13:59:08.283277  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:08.285879  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.286276  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.286320  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.286464  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:08.286657  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.286832  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.286987  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:08.287178  308571 main.go:141] libmachine: Using SSH client type: native
	I0729 13:59:08.287371  308571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:59:08.287387  308571 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-615666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-615666/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-615666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:59:08.394122  308571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:59:08.394155  308571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:59:08.394190  308571 buildroot.go:174] setting up certificates
	I0729 13:59:08.394203  308571 provision.go:84] configureAuth start
	I0729 13:59:08.394212  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetMachineName
	I0729 13:59:08.394529  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:59:08.396941  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.397359  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.397391  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.397494  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:08.399840  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.400163  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.400199  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.400383  308571 provision.go:143] copyHostCerts
	I0729 13:59:08.400440  308571 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:59:08.400449  308571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:59:08.400514  308571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:59:08.400617  308571 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:59:08.400626  308571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:59:08.400648  308571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:59:08.400765  308571 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:59:08.400779  308571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:59:08.400828  308571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:59:08.400903  308571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.newest-cni-615666 san=[127.0.0.1 192.168.39.244 localhost minikube newest-cni-615666]
	I0729 13:59:08.617410  308571 provision.go:177] copyRemoteCerts
	I0729 13:59:08.617484  308571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:59:08.617510  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:08.619953  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.620415  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.620444  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.620567  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:08.620802  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.620960  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:08.621094  308571 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:59:08.700817  308571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:59:08.726757  308571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:59:08.749762  308571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:59:08.773249  308571 provision.go:87] duration metric: took 379.031373ms to configureAuth
	I0729 13:59:08.773277  308571 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:59:08.773499  308571 config.go:182] Loaded profile config "newest-cni-615666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:59:08.773585  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:08.776211  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.776709  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:08.776739  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:08.776961  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:08.777214  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.777427  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:08.777730  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:08.777936  308571 main.go:141] libmachine: Using SSH client type: native
	I0729 13:59:08.778183  308571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:59:08.778207  308571 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:59:09.026426  308571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:59:09.026455  308571 machine.go:97] duration metric: took 972.544413ms to provisionDockerMachine
	I0729 13:59:09.026467  308571 start.go:293] postStartSetup for "newest-cni-615666" (driver="kvm2")
	I0729 13:59:09.026480  308571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:59:09.026495  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:59:09.026865  308571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:59:09.028504  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:09.031282  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.031636  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:09.031667  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.031906  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:09.032130  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:09.032309  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:09.032477  308571 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:59:09.111487  308571 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:59:09.115581  308571 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:59:09.115606  308571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:59:09.115668  308571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:59:09.115784  308571 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:59:09.115903  308571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:59:09.125310  308571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:59:09.149102  308571 start.go:296] duration metric: took 122.620514ms for postStartSetup
	I0729 13:59:09.149152  308571 fix.go:56] duration metric: took 19.071200569s for fixHost
	I0729 13:59:09.149179  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:09.151564  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.151949  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:09.151969  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.152189  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:09.152372  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:09.152568  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:09.152713  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:09.152919  308571 main.go:141] libmachine: Using SSH client type: native
	I0729 13:59:09.153139  308571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 13:59:09.153154  308571 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:59:09.249369  308571 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722261549.223005303
	
	I0729 13:59:09.249397  308571 fix.go:216] guest clock: 1722261549.223005303
	I0729 13:59:09.249407  308571 fix.go:229] Guest: 2024-07-29 13:59:09.223005303 +0000 UTC Remote: 2024-07-29 13:59:09.14915784 +0000 UTC m=+19.214232466 (delta=73.847463ms)
	I0729 13:59:09.249445  308571 fix.go:200] guest clock delta is within tolerance: 73.847463ms
	I0729 13:59:09.249453  308571 start.go:83] releasing machines lock for "newest-cni-615666", held for 19.171515053s
	I0729 13:59:09.249477  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:59:09.249780  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:59:09.252277  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.252631  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:09.252664  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.252746  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:59:09.253284  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:59:09.253474  308571 main.go:141] libmachine: (newest-cni-615666) Calling .DriverName
	I0729 13:59:09.253575  308571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:59:09.253634  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:09.253675  308571 ssh_runner.go:195] Run: cat /version.json
	I0729 13:59:09.253695  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHHostname
	I0729 13:59:09.256256  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.256282  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.256603  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:09.256637  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.256662  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:09.256682  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:09.256773  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:09.256901  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHPort
	I0729 13:59:09.256973  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:09.257052  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHKeyPath
	I0729 13:59:09.257122  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:09.257174  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetSSHUsername
	I0729 13:59:09.257254  308571 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:59:09.257329  308571 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/newest-cni-615666/id_rsa Username:docker}
	I0729 13:59:09.355938  308571 ssh_runner.go:195] Run: systemctl --version
	I0729 13:59:09.361914  308571 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:59:09.508521  308571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:59:09.515828  308571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:59:09.515910  308571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:59:09.534534  308571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:59:09.534560  308571 start.go:495] detecting cgroup driver to use...
	I0729 13:59:09.534630  308571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:59:09.552547  308571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:59:09.566911  308571 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:59:09.566963  308571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:59:09.580917  308571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:59:09.595017  308571 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:59:09.715985  308571 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:59:09.864501  308571 docker.go:233] disabling docker service ...
	I0729 13:59:09.864574  308571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:59:09.880061  308571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:59:09.893574  308571 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:59:10.024697  308571 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:59:10.147354  308571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:59:10.161325  308571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:59:10.182287  308571 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:59:10.182360  308571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:59:10.193601  308571 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:59:10.193658  308571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:59:10.204161  308571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:59:10.214910  308571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:59:10.225634  308571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:59:10.236720  308571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:59:10.247217  308571 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:59:10.264227  308571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:59:10.274357  308571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:59:10.283540  308571 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:59:10.283605  308571 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:59:10.296279  308571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:59:10.305541  308571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:59:10.425062  308571 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:59:10.557414  308571 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:59:10.557525  308571 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:59:10.562326  308571 start.go:563] Will wait 60s for crictl version
	I0729 13:59:10.562376  308571 ssh_runner.go:195] Run: which crictl
	I0729 13:59:10.566083  308571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:59:10.602324  308571 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:59:10.602427  308571 ssh_runner.go:195] Run: crio --version
	I0729 13:59:10.629357  308571 ssh_runner.go:195] Run: crio --version
	I0729 13:59:10.658808  308571 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 13:59:10.660322  308571 main.go:141] libmachine: (newest-cni-615666) Calling .GetIP
	I0729 13:59:10.662993  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:10.663351  308571 main.go:141] libmachine: (newest-cni-615666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:dc:f2", ip: ""} in network mk-newest-cni-615666: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:07 +0000 UTC Type:0 Mac:52:54:00:1a:dc:f2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:newest-cni-615666 Clientid:01:52:54:00:1a:dc:f2}
	I0729 13:59:10.663379  308571 main.go:141] libmachine: (newest-cni-615666) DBG | domain newest-cni-615666 has defined IP address 192.168.39.244 and MAC address 52:54:00:1a:dc:f2 in network mk-newest-cni-615666
	I0729 13:59:10.663571  308571 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:59:10.667666  308571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:59:10.682236  308571 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.770886255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261553770862203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d28d487-1826-4660-9bea-df807f5ef774 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.771595593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f7cfd46-b3de-4726-9e2b-5bf255c9f10b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.771648291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f7cfd46-b3de-4726-9e2b-5bf255c9f10b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.771836786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f7cfd46-b3de-4726-9e2b-5bf255c9f10b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.811745836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d52c7ff-0b57-4f0b-bff0-eceef2ae31bc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.811819317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d52c7ff-0b57-4f0b-bff0-eceef2ae31bc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.812947580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d2e2f3e-ef6a-4766-a9c5-8c18b4340322 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.813474324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261553813445729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d2e2f3e-ef6a-4766-a9c5-8c18b4340322 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.814154494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20b49912-c806-4ab5-93e7-f8b2b9f08dd6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.814231869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20b49912-c806-4ab5-93e7-f8b2b9f08dd6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.814556555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20b49912-c806-4ab5-93e7-f8b2b9f08dd6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.866427924Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a6f4c17-fe30-4af8-884a-c519cbd1c30d name=/runtime.v1.RuntimeService/Version
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.866602417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a6f4c17-fe30-4af8-884a-c519cbd1c30d name=/runtime.v1.RuntimeService/Version
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.868394080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e50d7cb-d2cd-467c-8e99-7155dd09a963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.868810583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261553868789838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e50d7cb-d2cd-467c-8e99-7155dd09a963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.869273935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46354dc7-76e5-4b63-b807-e1998d2ad612 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.869715847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46354dc7-76e5-4b63-b807-e1998d2ad612 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.870162367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46354dc7-76e5-4b63-b807-e1998d2ad612 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.916148666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49e6eaff-75f3-4616-a964-829abe7419fc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.916231890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49e6eaff-75f3-4616-a964-829abe7419fc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.918203351Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c044605-5d70-4303-8c8c-aa2025412d4c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.918751489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261553918720626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c044605-5d70-4303-8c8c-aa2025412d4c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.919268990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0504e5b3-f47d-4614-813c-18464fea4f7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.919396994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0504e5b3-f47d-4614-813c-18464fea4f7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:59:13 default-k8s-diff-port-972693 crio[723]: time="2024-07-29 13:59:13.919645876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a,PodSandboxId:f617655f1ffc9341a2eb78456b2f31fd893c3a071d5b1d80802d58823fcc309e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260592757550758,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b577293-6827-4c76-a404-6b53739ae6e9,},Annotations:map[string]string{io.kubernetes.container.hash: b287aae0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194,PodSandboxId:5cba4a42b70da73f69c7016235e805e2de70979f3112456592dfddb73a72437d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591904081965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t29vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d4d8867-523f-4115-b3dd-76a9e2765af1,},Annotations:map[string]string{io.kubernetes.container.hash: a47ee6a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97,PodSandboxId:7e0a4668e08d707cb73b7df3a75deddc957cb80f4bd9ede5c8a988c9cd6f93c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260591763675527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zlz8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aecbb6c3-53d7-4497-a26f-c41a7795681a,},Annotations:map[string]string{io.kubernetes.container.hash: 9db8795,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8,PodSandboxId:b7f6a66020f31c30226795125159442b8fc2cdf4c3b1c2ef5cdf203b04fbafce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722260590931971079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfsk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952c235c-310b-4f82-ba2d-fe06f3556a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a30a1c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357,PodSandboxId:c5ef647f2c6a32fde8e2679b1b55354a6cec2c43a2935898b7026bcff29a2781,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260570544915087
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36b6bf9ef24d0d4b5362b88fa5c3794,},Annotations:map[string]string{io.kubernetes.container.hash: bf329a7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc,PodSandboxId:2fe988c66f5636c3978b8549517cae3c2cc236900974fafae586223ad5263c93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172226057054
3171555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c62fc2ec64dd484d55d238738e3faa,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4,PodSandboxId:e28425cea2e23d8bcdf4075256c3cea2c0a61efbfbf4a4f2d99b1b0fd46ca7bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172226
0570528201134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 978c0ae27d44e4f25f868861978552ab,},Annotations:map[string]string{io.kubernetes.container.hash: 25f21c04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612,PodSandboxId:44d2533ee117cad6fc42ee59fb9d3efbb28f220cb2c9cfe7e4f32c62b8e97eec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260570481349335,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-972693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cda54b08904e20832fd8542849c6d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0504e5b3-f47d-4614-813c-18464fea4f7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f04ef4fd91577       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   f617655f1ffc9       storage-provisioner
	52dcd83ed0857       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   5cba4a42b70da       coredns-7db6d8ff4d-t29vc
	d4f3e11d12458       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   7e0a4668e08d7       coredns-7db6d8ff4d-zlz8m
	6deaf42164b30       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   b7f6a66020f31       kube-proxy-tfsk9
	ebbfcefc4958f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   c5ef647f2c6a3       kube-apiserver-default-k8s-diff-port-972693
	c623732bc8fb3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   2fe988c66f563       kube-controller-manager-default-k8s-diff-port-972693
	9f4531ab2ab82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   e28425cea2e23       etcd-default-k8s-diff-port-972693
	be0921a50fe25       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   44d2533ee117c       kube-scheduler-default-k8s-diff-port-972693
	
	
	==> coredns [52dcd83ed085799e110ef5addf79202e933b852380cb2c894550861772f27194] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d4f3e11d124588260a25997609789489e01d15234bd3481eddd4d4ebca0e0b97] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-972693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-972693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=default-k8s-diff-port-972693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:42:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-972693
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:59:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:58:36 +0000   Mon, 29 Jul 2024 13:42:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:58:36 +0000   Mon, 29 Jul 2024 13:42:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:58:36 +0000   Mon, 29 Jul 2024 13:42:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:58:36 +0000   Mon, 29 Jul 2024 13:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.34
	  Hostname:    default-k8s-diff-port-972693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 77c483e8de6e4df683fb384254beda0d
	  System UUID:                77c483e8-de6e-4df6-83fb-384254beda0d
	  Boot ID:                    f19d25a4-acf7-4e59-ad71-5d597d39b42f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-t29vc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-zlz8m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-972693                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-972693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-972693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tfsk9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-972693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-wwxmx                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-972693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-972693 event: Registered Node default-k8s-diff-port-972693 in Controller
	
	
	==> dmesg <==
	[  +0.044583] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.509754] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.565067] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.640436] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.073909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065148] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.191585] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.121478] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.356058] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.686772] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.065461] kauditd_printk_skb: 130 callbacks suppressed
	[Jul29 13:38] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.649934] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.262408] kauditd_printk_skb: 84 callbacks suppressed
	[  +6.053209] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 13:42] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.590170] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +4.743682] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.814625] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[Jul29 13:43] systemd-fstab-generator[4122]: Ignoring "noauto" option for root device
	[  +0.133916] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 13:44] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [9f4531ab2ab82393edb07eda93f3e1dd12f3b081c74951d0a57d53d053bd33c4] <==
	{"level":"info","ts":"2024-07-29T13:42:51.612626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 41d396bb56c46004 elected leader 41d396bb56c46004 at term 2"}
	{"level":"info","ts":"2024-07-29T13:42:51.61653Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"41d396bb56c46004","local-member-attributes":"{Name:default-k8s-diff-port-972693 ClientURLs:[https://192.168.50.34:2379]}","request-path":"/0/members/41d396bb56c46004/attributes","cluster-id":"ab9794b1ad75cdde","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T13:42:51.616688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:42:51.617078Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.623335Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:42:51.623391Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:42:51.617345Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T13:42:51.62553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab9794b1ad75cdde","local-member-id":"41d396bb56c46004","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.626461Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.630371Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T13:42:51.626887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.34:2379"}
	{"level":"info","ts":"2024-07-29T13:42:51.630568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:52:51.920433Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-07-29T13:52:51.930421Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":714,"took":"9.507039ms","hash":236961800,"current-db-size-bytes":2326528,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2326528,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-29T13:52:51.9308Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":236961800,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T13:57:51.937334Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2024-07-29T13:57:51.941065Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":957,"took":"3.226499ms","hash":565769951,"current-db-size-bytes":2326528,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T13:57:51.941136Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":565769951,"revision":957,"compact-revision":714}
	{"level":"warn","ts":"2024-07-29T13:58:25.599569Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6918814351277346770,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-29T13:58:25.688403Z","caller":"traceutil/trace.go:171","msg":"trace[233906317] linearizableReadLoop","detail":"{readStateIndex:1430; appliedIndex:1429; }","duration":"589.708724ms","start":"2024-07-29T13:58:25.098657Z","end":"2024-07-29T13:58:25.688366Z","steps":["trace[233906317] 'read index received'  (duration: 589.551553ms)","trace[233906317] 'applied index is now lower than readState.Index'  (duration: 156.712µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:58:25.688594Z","caller":"traceutil/trace.go:171","msg":"trace[747897533] transaction","detail":"{read_only:false; response_revision:1229; number_of_response:1; }","duration":"635.065175ms","start":"2024-07-29T13:58:25.053513Z","end":"2024-07-29T13:58:25.688578Z","steps":["trace[747897533] 'process raft request'  (duration: 634.698688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:58:25.68864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"589.897395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.34\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-29T13:58:25.688819Z","caller":"traceutil/trace.go:171","msg":"trace[272102945] range","detail":"{range_begin:/registry/masterleases/192.168.50.34; range_end:; response_count:1; response_revision:1229; }","duration":"590.175395ms","start":"2024-07-29T13:58:25.098633Z","end":"2024-07-29T13:58:25.688809Z","steps":["trace[272102945] 'agreement among raft nodes before linearized reading'  (duration: 589.896095ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:58:25.688884Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:58:25.098622Z","time spent":"590.247168ms","remote":"127.0.0.1:46738","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.50.34\" "}
	{"level":"warn","ts":"2024-07-29T13:58:25.689486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:58:25.053498Z","time spent":"635.163471ms","remote":"127.0.0.1:46930","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-972693\" mod_revision:1221 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-972693\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-972693\" > >"}
	
	
	==> kernel <==
	 13:59:14 up 21 min,  0 users,  load average: 0.11, 0.11, 0.14
	Linux default-k8s-diff-port-972693 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ebbfcefc4958f17f430033a2802d4d325efe3b9f6181f5523ef427df49277357] <==
	W0729 13:57:53.338213       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:57:53.338381       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 13:57:54.339482       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:57:54.339623       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:57:54.339637       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:57:54.339716       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:57:54.339756       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:57:54.340959       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 13:58:25.690035       1 trace.go:236] Trace[93454431]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d51cbbef-0b60-42fb-a5eb-0d3beef0e030,client:192.168.50.34,api-group:coordination.k8s.io,api-version:v1,name:default-k8s-diff-port-972693,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-diff-port-972693,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PUT (29-Jul-2024 13:58:25.052) (total time: 637ms):
	Trace[93454431]: ["GuaranteedUpdate etcd3" audit-id:d51cbbef-0b60-42fb-a5eb-0d3beef0e030,key:/leases/kube-node-lease/default-k8s-diff-port-972693,type:*coordination.Lease,resource:leases.coordination.k8s.io 637ms (13:58:25.052)
	Trace[93454431]:  ---"Txn call completed" 636ms (13:58:25.689)]
	Trace[93454431]: [637.579811ms] [637.579811ms] END
	I0729 13:58:25.830603       1 trace.go:236] Trace[2138489229]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.34,type:*v1.Endpoints,resource:apiServerIPInfo (29-Jul-2024 13:58:25.098) (total time: 732ms):
	Trace[2138489229]: ---"initial value restored" 591ms (13:58:25.689)
	Trace[2138489229]: ---"Transaction prepared" 138ms (13:58:25.828)
	Trace[2138489229]: [732.324611ms] [732.324611ms] END
	W0729 13:58:54.340375       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:58:54.340566       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 13:58:54.340596       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 13:58:54.341517       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 13:58:54.341548       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 13:58:54.341646       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c623732bc8fb3c27e7c14152546d83951379549a45125f67fb679aa871e404dc] <==
	I0729 13:53:40.354178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:54:09.821387       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:54:10.362131       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 13:54:19.996762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="173.941µs"
	I0729 13:54:32.986976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="93.354µs"
	E0729 13:54:39.826907       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:54:40.373510       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:55:09.833003       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:55:10.381140       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:55:39.842541       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:55:40.389409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:56:09.848223       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:56:10.397763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:56:39.852838       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:56:40.409771       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:57:09.858626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:57:10.418153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:57:39.864440       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:57:40.427126       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:58:09.871380       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:58:10.437102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:58:39.877818       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:58:40.462645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 13:59:09.883759       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 13:59:10.472329       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6deaf42164b308b808f5f016359c422290a6f03a0cbf54cfda984c1545f973b8] <==
	I0729 13:43:11.338116       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:43:11.362614       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.34"]
	I0729 13:43:11.482883       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:43:11.482932       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:43:11.482957       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:43:11.489752       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:43:11.490030       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:43:11.490072       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:43:11.494582       1 config.go:192] "Starting service config controller"
	I0729 13:43:11.494596       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:43:11.494622       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:43:11.494625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:43:11.495032       1 config.go:319] "Starting node config controller"
	I0729 13:43:11.495038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:43:11.595399       1 shared_informer.go:320] Caches are synced for node config
	I0729 13:43:11.595447       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:43:11.595478       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [be0921a50fe257a83ea78e146ecf0c71b92c5779cbeb07ab4968f7c1c0c4c612] <==
	W0729 13:42:53.351024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:42:53.351036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:42:53.351107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:53.351135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:53.351183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:42:53.351212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:42:53.351269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:53.351335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:53.351395       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:42:53.351408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 13:42:53.352038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:42:53.352075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:42:54.195063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.195144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:54.471045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.471145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:54.483466       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:42:54.483578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:42:54.580224       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:42:54.580403       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:42:54.628493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.628616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:42:54.639439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:42:54.639551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0729 13:42:56.632357       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:56:55 default-k8s-diff-port-972693 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:56:59 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:56:59.971147    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:57:13 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:57:13.973643    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:57:24 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:57:24.971378    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:57:36 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:57:36.970463    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:57:47 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:57:47.971393    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:57:55 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:57:55.992039    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:57:55 default-k8s-diff-port-972693 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:57:55 default-k8s-diff-port-972693 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:57:55 default-k8s-diff-port-972693 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:57:55 default-k8s-diff-port-972693 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:58:02 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:58:02.970676    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:58:13 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:58:13.970694    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:58:26 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:58:26.971728    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:58:37 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:58:37.971154    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:58:52 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:58:52.971069    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	Jul 29 13:58:55 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:58:55.992501    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:58:55 default-k8s-diff-port-972693 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:58:55 default-k8s-diff-port-972693 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:58:55 default-k8s-diff-port-972693 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:58:55 default-k8s-diff-port-972693 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:59:05 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:59:05.986906    3912 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 13:59:05 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:59:05.986989    3912 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 13:59:05 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:59:05.987644    3912 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6s5gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-wwxmx_kube-system(268a70c4-a35d-45c5-9da9-4e1f7dcf52fa): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 13:59:05 default-k8s-diff-port-972693 kubelet[3912]: E0729 13:59:05.987715    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-wwxmx" podUID="268a70c4-a35d-45c5-9da9-4e1f7dcf52fa"
	
	
	==> storage-provisioner [f04ef4fd91577506a5a1e3c63dca7660c357c1b3e5088df7d2b328b6cc4cd48a] <==
	I0729 13:43:12.932344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:43:12.941923       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:43:12.941988       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:43:12.954405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:43:12.954587       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-972693_976e62ff-6aaa-417e-a6aa-e6b502bc4345!
	I0729 13:43:12.954711       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba8b4bdb-64f1-482b-bd84-282f3fe569f2", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-972693_976e62ff-6aaa-417e-a6aa-e6b502bc4345 became leader
	I0729 13:43:13.055150       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-972693_976e62ff-6aaa-417e-a6aa-e6b502bc4345!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-wwxmx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 describe pod metrics-server-569cc877fc-wwxmx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-972693 describe pod metrics-server-569cc877fc-wwxmx: exit status 1 (61.268623ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-wwxmx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-972693 describe pod metrics-server-569cc877fc-wwxmx: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (415.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (141.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:55:37.633444  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[the preceding warning repeated 28 more times]
E0729 13:56:56.649659  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[the preceding warning repeated 4 more times]
E0729 13:57:01.367645  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[the preceding warning repeated 15 more times]
E0729 13:57:17.402441  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
E0729 13:57:18.313466  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[the preceding warning repeated 9 more times]
E0729 13:57:28.991647  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.227:8443: connect: connection refused
[the preceding warning repeated 19 more times]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (237.355205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-924039" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-924039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-924039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.811µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-924039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
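For readers tracing the failure: the wait behind helpers_test.go:329 boils down to repeatedly listing pods by label selector and tolerating transient API errors until a deadline. The Go sketch below is a minimal, hypothetical reconstruction of that pattern using client-go; the function name, poll interval, and kubeconfig handling are assumptions here, not minikube's actual helper code.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboardPod (hypothetical) polls the pod list for the given label
// selector until one pod is Running or the timeout expires. Transient API
// errors, such as "connection refused" while the apiserver is down, are
// logged as warnings and the poll continues, which is what produces the long
// run of WARNING lines seen above.
func waitForDashboardPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil // keep polling until the deadline
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForDashboardPod(context.Background(), cs,
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	if err != nil {
		fmt.Println("failed waiting for dashboard pod:", err)
	}
}

When the apiserver never comes back, as in this run, a loop of this shape exhausts the 9m0s budget and the test surfaces the "context deadline exceeded" failure recorded above.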
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (222.219928ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-924039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-924039 logs -n 25: (1.589549587s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-507612 sudo cat                              | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo                                  | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo find                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-507612 sudo crio                             | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-507612                                       | bridge-507612                | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-312895 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:29 UTC |
	|         | disable-driver-mounts-312895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:29 UTC | 29 Jul 24 13:30 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-135920            | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-566777             | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-566777                                   | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-972693  | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC | 29 Jul 24 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:30 UTC |                     |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-135920                 | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-566777                  | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-924039        | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-135920                                  | embed-certs-135920           | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-566777 --memory=2200                     | no-preload-566777            | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-972693       | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-972693 | jenkins | v1.33.1 | 29 Jul 24 13:33 UTC | 29 Jul 24 13:43 UTC |
	|         | default-k8s-diff-port-972693                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-924039             | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC | 29 Jul 24 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-924039                              | old-k8s-version-924039       | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:34:10
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:34:10.969228  301425 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:10.969348  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969356  301425 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:10.969360  301425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:10.969506  301425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:34:10.970007  301425 out.go:298] Setting JSON to false
	I0729 13:34:10.970908  301425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11794,"bootTime":1722248257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:34:10.970971  301425 start.go:139] virtualization: kvm guest
	I0729 13:34:10.973245  301425 out.go:177] * [old-k8s-version-924039] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:34:10.974804  301425 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:34:10.974803  301425 notify.go:220] Checking for updates...
	I0729 13:34:10.977011  301425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:34:10.978270  301425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:34:10.979473  301425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:34:10.980743  301425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:34:10.981923  301425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:34:10.983514  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:34:10.983962  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:10.984049  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:10.998985  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 13:34:10.999407  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:10.999928  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:10.999951  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.000306  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.000497  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.002455  301425 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 13:34:11.003702  301425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:34:11.003997  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:34:11.004037  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:11.018707  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0729 13:34:11.019177  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:11.019653  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:34:11.019676  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:11.019968  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:11.020126  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:34:11.055819  301425 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:34:11.057085  301425 start.go:297] selected driver: kvm2
	I0729 13:34:11.057104  301425 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.057242  301425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:34:11.057967  301425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.058029  301425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:34:11.073706  301425 install.go:137] /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:34:11.074089  301425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:34:11.074169  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:34:11.074188  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:34:11.074240  301425 start.go:340] cluster config:
	{Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:34:11.074366  301425 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:34:11.076296  301425 out.go:177] * Starting "old-k8s-version-924039" primary control-plane node in "old-k8s-version-924039" cluster
	I0729 13:34:09.149068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:11.077828  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:34:11.077869  301425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:34:11.077879  301425 cache.go:56] Caching tarball of preloaded images
	I0729 13:34:11.077959  301425 preload.go:172] Found /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:34:11.077970  301425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 13:34:11.078069  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:34:11.078241  301425 start.go:360] acquireMachinesLock for old-k8s-version-924039: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:34:15.229067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:18.301058  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:24.381104  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:27.453064  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:33.533067  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:36.605120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:42.685075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:45.757111  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:51.837033  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:34:54.909068  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:00.989073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:04.061125  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:10.141082  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:13.213123  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:19.293109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:22.365061  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:28.445075  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:31.517094  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:37.597080  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:40.669073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:46.749070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:49.821083  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:55.901013  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:35:58.973149  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:05.053098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:08.125109  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:14.205093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:17.277093  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:23.357105  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:26.429122  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:32.509070  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:35.581107  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:41.661120  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:44.733129  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:50.813085  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:53.885117  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:36:59.965073  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:03.037079  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:09.117098  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
	I0729 13:37:12.189049  300705 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.207:22: connect: no route to host
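The long run of "connect: no route to host" messages above is libmachine from process 300705 repeatedly dialing TCP port 22 on the embed-certs-135920 guest at 192.168.72.207 while the VM stays unreachable; each line is one failed attempt in a polling loop. A minimal sketch of that kind of dial-and-retry wait, using only the Go standard library (the helper name, interval, and timeout are illustrative assumptions, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH keeps dialing addr:22 until the port answers or the overall timeout expires.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", net.JoinHostPort(addr, "22"), 10*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // SSH port is reachable
    		}
    		fmt.Printf("Error dialing TCP: %v\n", err) // mirrors the log lines above
    		time.Sleep(3 * time.Second)               // pause before the next attempt
    	}
    	return fmt.Errorf("timed out waiting for %s:22", addr)
    }

    func main() {
    	if err := waitForSSH("192.168.72.207", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

In this run the loop never succeeds, which is why provisioning for embed-certs-135920 later gives up with "provision: host is not running" and schedules a retry.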
	I0729 13:37:15.193505  300746 start.go:364] duration metric: took 4m36.683808785s to acquireMachinesLock for "no-preload-566777"
	I0729 13:37:15.193569  300746 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:15.193577  300746 fix.go:54] fixHost starting: 
	I0729 13:37:15.193937  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:15.193976  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:15.209623  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0729 13:37:15.210158  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:15.210625  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:37:15.210646  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:15.211001  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:15.211265  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:15.211468  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:37:15.213144  300746 fix.go:112] recreateIfNeeded on no-preload-566777: state=Stopped err=<nil>
	I0729 13:37:15.213185  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	W0729 13:37:15.213349  300746 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:15.215474  300746 out.go:177] * Restarting existing kvm2 VM for "no-preload-566777" ...
	I0729 13:37:15.190804  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:15.190850  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191224  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:37:15.191257  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:37:15.191494  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:37:15.193354  300705 machine.go:97] duration metric: took 4m37.425774293s to provisionDockerMachine
	I0729 13:37:15.193407  300705 fix.go:56] duration metric: took 4m37.447841932s for fixHost
	I0729 13:37:15.193419  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 4m37.447869212s
	W0729 13:37:15.193447  300705 start.go:714] error starting host: provision: host is not running
	W0729 13:37:15.193569  300705 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 13:37:15.193581  300705 start.go:729] Will try again in 5 seconds ...
	I0729 13:37:15.216957  300746 main.go:141] libmachine: (no-preload-566777) Calling .Start
	I0729 13:37:15.217120  300746 main.go:141] libmachine: (no-preload-566777) Ensuring networks are active...
	I0729 13:37:15.217761  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network default is active
	I0729 13:37:15.218067  300746 main.go:141] libmachine: (no-preload-566777) Ensuring network mk-no-preload-566777 is active
	I0729 13:37:15.218451  300746 main.go:141] libmachine: (no-preload-566777) Getting domain xml...
	I0729 13:37:15.219134  300746 main.go:141] libmachine: (no-preload-566777) Creating domain...
	I0729 13:37:16.412301  300746 main.go:141] libmachine: (no-preload-566777) Waiting to get IP...
	I0729 13:37:16.413162  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.413576  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.413670  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.413557  302040 retry.go:31] will retry after 233.512145ms: waiting for machine to come up
	I0729 13:37:16.649335  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.649921  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.649945  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.649885  302040 retry.go:31] will retry after 328.846738ms: waiting for machine to come up
	I0729 13:37:16.980566  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:16.980976  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:16.981022  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:16.980926  302040 retry.go:31] will retry after 329.69915ms: waiting for machine to come up
	I0729 13:37:17.312547  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.312948  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.312977  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.312906  302040 retry.go:31] will retry after 418.810733ms: waiting for machine to come up
	I0729 13:37:17.733615  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:17.734042  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:17.734065  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:17.734009  302040 retry.go:31] will retry after 694.191211ms: waiting for machine to come up
	I0729 13:37:20.196079  300705 start.go:360] acquireMachinesLock for embed-certs-135920: {Name:mk080f325f153c5dcf3942d95ca198ed89cc6e64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:18.429670  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:18.430024  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:18.430055  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:18.429973  302040 retry.go:31] will retry after 857.66396ms: waiting for machine to come up
	I0729 13:37:19.289078  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:19.289491  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:19.289521  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:19.289458  302040 retry.go:31] will retry after 994.340261ms: waiting for machine to come up
	I0729 13:37:20.285875  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:20.286308  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:20.286340  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:20.286263  302040 retry.go:31] will retry after 1.052380852s: waiting for machine to come up
	I0729 13:37:21.340435  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:21.340775  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:21.340821  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:21.340743  302040 retry.go:31] will retry after 1.429700498s: waiting for machine to come up
	I0729 13:37:22.772362  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:22.772754  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:22.772782  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:22.772700  302040 retry.go:31] will retry after 1.702185495s: waiting for machine to come up
	I0729 13:37:24.477636  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:24.478074  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:24.478106  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:24.478003  302040 retry.go:31] will retry after 2.649912402s: waiting for machine to come up
	I0729 13:37:27.129797  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:27.130212  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:27.130243  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:27.130159  302040 retry.go:31] will retry after 3.079887428s: waiting for machine to come up
	I0729 13:37:30.213431  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:30.213918  300746 main.go:141] libmachine: (no-preload-566777) DBG | unable to find current IP address of domain no-preload-566777 in network mk-no-preload-566777
	I0729 13:37:30.213958  300746 main.go:141] libmachine: (no-preload-566777) DBG | I0729 13:37:30.213875  302040 retry.go:31] will retry after 3.08003223s: waiting for machine to come up
	I0729 13:37:33.297139  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.297604  300746 main.go:141] libmachine: (no-preload-566777) Found IP for machine: 192.168.61.84
	I0729 13:37:33.297627  300746 main.go:141] libmachine: (no-preload-566777) Reserving static IP address...
	I0729 13:37:33.297639  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has current primary IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.298106  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.298146  300746 main.go:141] libmachine: (no-preload-566777) Reserved static IP address: 192.168.61.84
	I0729 13:37:33.298164  300746 main.go:141] libmachine: (no-preload-566777) DBG | skip adding static IP to network mk-no-preload-566777 - found existing host DHCP lease matching {name: "no-preload-566777", mac: "52:54:00:c4:42:1a", ip: "192.168.61.84"}
	I0729 13:37:33.298178  300746 main.go:141] libmachine: (no-preload-566777) DBG | Getting to WaitForSSH function...
	I0729 13:37:33.298194  300746 main.go:141] libmachine: (no-preload-566777) Waiting for SSH to be available...
	I0729 13:37:33.300310  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300618  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.300653  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.300731  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH client type: external
	I0729 13:37:33.300773  300746 main.go:141] libmachine: (no-preload-566777) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa (-rw-------)
	I0729 13:37:33.300826  300746 main.go:141] libmachine: (no-preload-566777) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:33.300957  300746 main.go:141] libmachine: (no-preload-566777) DBG | About to run SSH command:
	I0729 13:37:33.300985  300746 main.go:141] libmachine: (no-preload-566777) DBG | exit 0
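With the no-preload-566777 VM reachable again, provisioning probes it through the "external" SSH client: it shells out to the system ssh binary with host-key checking disabled and key-only authentication against the per-machine id_rsa, using the argument vector printed above, and runs "exit 0" as the liveness check. A minimal sketch of that invocation (the helper itself is illustrative; the flags and paths are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runRemote executes cmd on the guest through the system ssh binary,
    // mirroring the options shown in the log (no known_hosts, identity file only).
    func runRemote(ip, keyPath, cmd string) ([]byte, error) {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		cmd,
    	}
    	return exec.Command("ssh", args...).CombinedOutput()
    }

    func main() {
    	out, err := runRemote("192.168.61.84",
    		"/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa",
    		"exit 0")
    	fmt.Println(string(out), err)
    }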
	I0729 13:37:34.861481  301044 start.go:364] duration metric: took 4m23.064160625s to acquireMachinesLock for "default-k8s-diff-port-972693"
	I0729 13:37:34.861564  301044 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:34.861576  301044 fix.go:54] fixHost starting: 
	I0729 13:37:34.862021  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:34.862055  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:34.879106  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0729 13:37:34.879506  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:34.880050  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:37:34.880077  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:34.880423  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:34.880637  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:34.880838  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:37:34.882251  301044 fix.go:112] recreateIfNeeded on default-k8s-diff-port-972693: state=Stopped err=<nil>
	I0729 13:37:34.882284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	W0729 13:37:34.882465  301044 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:34.884611  301044 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-972693" ...
	I0729 13:37:33.420745  300746 main.go:141] libmachine: (no-preload-566777) DBG | SSH cmd err, output: <nil>: 
	I0729 13:37:33.421178  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetConfigRaw
	I0729 13:37:33.421861  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.424343  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.424680  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.424710  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.425061  300746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/config.json ...
	I0729 13:37:33.425244  300746 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:33.425262  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:33.425513  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.427708  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.427961  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.427989  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.428171  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.428354  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428528  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.428672  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.428933  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.429139  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.429150  300746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:33.525027  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:33.525065  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525306  300746 buildroot.go:166] provisioning hostname "no-preload-566777"
	I0729 13:37:33.525340  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.525551  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.528124  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528491  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.528529  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.528677  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.528865  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529025  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.529144  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.529286  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.529453  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.529465  300746 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-566777 && echo "no-preload-566777" | sudo tee /etc/hostname
	I0729 13:37:33.638867  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-566777
	
	I0729 13:37:33.638902  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.641406  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641730  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.641762  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.641908  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:33.642112  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642285  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:33.642414  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:33.642555  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:33.642727  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:33.642743  300746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-566777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-566777/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-566777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:33.749760  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:33.749789  300746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:33.749812  300746 buildroot.go:174] setting up certificates
	I0729 13:37:33.749821  300746 provision.go:84] configureAuth start
	I0729 13:37:33.749831  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetMachineName
	I0729 13:37:33.750114  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:33.752924  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753241  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.753264  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.753477  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:33.755385  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755681  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:33.755701  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:33.755840  300746 provision.go:143] copyHostCerts
	I0729 13:37:33.755904  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:33.755926  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:33.756019  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:33.756156  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:33.756169  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:33.756206  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:33.756276  300746 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:33.756286  300746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:33.756317  300746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:33.756380  300746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.no-preload-566777 san=[127.0.0.1 192.168.61.84 localhost minikube no-preload-566777]
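The configureAuth step above regenerates the docker-machine server certificate with subject alternative names covering the loopback address, the guest IP, and the machine name. A minimal, self-signed sketch of a certificate carrying those SANs with crypto/x509 (illustrative only; minikube actually signs against the ca.pem/ca-key.pem pair listed in the log rather than self-signing):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-566777"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		// SANs matching the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.84")},
    		DNSNames:    []string{"localhost", "minikube", "no-preload-566777"},
    		KeyUsage:    x509.KeyUsageDigitalSignature,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }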
	I0729 13:37:34.226953  300746 provision.go:177] copyRemoteCerts
	I0729 13:37:34.227033  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:34.227066  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.229542  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229816  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.229853  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.229966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.230177  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.230314  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.230452  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.310803  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:37:34.334545  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:37:34.357908  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:34.381163  300746 provision.go:87] duration metric: took 631.325967ms to configureAuth
	I0729 13:37:34.381200  300746 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:34.381441  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:37:34.381535  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.383985  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384286  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.384312  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.384473  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.384681  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384862  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.384995  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.385176  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.385393  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.385414  300746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:34.640587  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:34.640615  300746 machine.go:97] duration metric: took 1.215357318s to provisionDockerMachine
	I0729 13:37:34.640628  300746 start.go:293] postStartSetup for "no-preload-566777" (driver="kvm2")
	I0729 13:37:34.640645  300746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:34.640683  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.641067  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:34.641104  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.643711  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644066  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.644097  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.644215  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.644398  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.644555  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.644677  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.723215  300746 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:34.727393  300746 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:34.727425  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:34.727507  300746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:34.727614  300746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:34.727770  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:34.736666  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:34.759678  300746 start.go:296] duration metric: took 119.034973ms for postStartSetup
	I0729 13:37:34.759716  300746 fix.go:56] duration metric: took 19.566140877s for fixHost
	I0729 13:37:34.759748  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.762103  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762468  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.762491  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.762645  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.762843  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763008  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.763111  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.763229  300746 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:34.763392  300746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I0729 13:37:34.763403  300746 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:34.861306  300746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260254.835831305
	
	I0729 13:37:34.861333  300746 fix.go:216] guest clock: 1722260254.835831305
	I0729 13:37:34.861341  300746 fix.go:229] Guest: 2024-07-29 13:37:34.835831305 +0000 UTC Remote: 2024-07-29 13:37:34.759720831 +0000 UTC m=+296.387252495 (delta=76.110474ms)
	I0729 13:37:34.861376  300746 fix.go:200] guest clock delta is within tolerance: 76.110474ms
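After provisioning, the guest clock is compared against the host-side measurement: the guest reads 13:37:34.835831305 while the remote reference is 13:37:34.759720831, a delta of about 76ms, which is accepted as "within tolerance" so the clock is not reset. A tiny sketch of that comparison (the 2s tolerance is an assumption for illustration; the log does not state the actual threshold):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	guest := time.Date(2024, 7, 29, 13, 37, 34, 835831305, time.UTC)
    	host := time.Date(2024, 7, 29, 13, 37, 34, 759720831, time.UTC)
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold, not taken from the log
    	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
    }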
	I0729 13:37:34.861384  300746 start.go:83] releasing machines lock for "no-preload-566777", held for 19.66783585s
	I0729 13:37:34.861413  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.861708  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:34.864181  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864534  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.864567  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.864757  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865296  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865467  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:37:34.865546  300746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:34.865600  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.865726  300746 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:34.865753  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:37:34.868333  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868522  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868772  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868810  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868839  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:34.868859  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:34.868913  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869060  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:37:34.869152  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869209  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:37:34.869300  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869349  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:37:34.869417  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.869551  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:37:34.970978  300746 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:34.978226  300746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:35.128653  300746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:35.134619  300746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:35.134688  300746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:35.150674  300746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:35.150697  300746 start.go:495] detecting cgroup driver to use...
	I0729 13:37:35.150762  300746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:35.166545  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:35.178859  300746 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:35.178913  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:35.197133  300746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:35.214430  300746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:35.337707  300746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:35.467057  300746 docker.go:233] disabling docker service ...
	I0729 13:37:35.467134  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:35.480960  300746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:35.493850  300746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:35.629455  300746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:35.741534  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:35.754886  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:35.773243  300746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 13:37:35.773323  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.783589  300746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:35.783673  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.794150  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.805389  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.816636  300746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:35.828027  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.838467  300746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:35.856470  300746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
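The sequence of sed edits above rewrites the cri-o drop-in so that the pause image, the cgroup driver, conmon's cgroup, and the unprivileged-port sysctl all match what minikube expects before crio is restarted. Reconstructed from those commands, the relevant part of /etc/crio/crio.conf.d/02-crio.conf ends up looking roughly like this (other keys present in the real file are omitted, and the section placement is an assumption based on standard cri-o configuration layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]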
	I0729 13:37:35.866773  300746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:35.876110  300746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:35.876175  300746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:35.889768  300746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:35.909971  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:36.046023  300746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:36.192169  300746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:36.192238  300746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:36.197281  300746 start.go:563] Will wait 60s for crictl version
	I0729 13:37:36.197365  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.201359  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:36.248317  300746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:37:36.248420  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.276247  300746 ssh_runner.go:195] Run: crio --version
	I0729 13:37:36.306549  300746 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 13:37:34.885944  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Start
	I0729 13:37:34.886114  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring networks are active...
	I0729 13:37:34.886856  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network default is active
	I0729 13:37:34.887211  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Ensuring network mk-default-k8s-diff-port-972693 is active
	I0729 13:37:34.887684  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Getting domain xml...
	I0729 13:37:34.888427  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Creating domain...
	I0729 13:37:36.147265  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting to get IP...
	I0729 13:37:36.148095  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148547  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.148616  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.148516  302181 retry.go:31] will retry after 191.117257ms: waiting for machine to come up
	I0729 13:37:36.340984  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.341507  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.341444  302181 retry.go:31] will retry after 285.557329ms: waiting for machine to come up
	I0729 13:37:36.629066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629670  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:36.629698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:36.629621  302181 retry.go:31] will retry after 397.294163ms: waiting for machine to come up
	I0729 13:37:36.307930  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetIP
	I0729 13:37:36.311057  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311389  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:37:36.311417  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:37:36.311699  300746 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:36.316257  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:36.330109  300746 kubeadm.go:883] updating cluster {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:36.330268  300746 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 13:37:36.330320  300746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:36.367218  300746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 13:37:36.367250  300746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:37:36.367327  300746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.367333  300746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.367394  300746 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 13:37:36.367404  300746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.367432  300746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.367353  300746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.367412  300746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.367743  300746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.369020  300746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.369125  300746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.369150  300746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.369203  300746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.369015  300746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.369484  300746 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 13:37:36.369609  300746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:36.369763  300746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.560256  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.600945  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.604476  300746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 13:37:36.604539  300746 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.604592  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.606566  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 13:37:36.649109  300746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 13:37:36.649210  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:37:36.649212  300746 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.649328  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.696863  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.698623  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.713816  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.727059  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.764110  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.764204  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:37:36.764208  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.784479  300746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 13:37:36.784542  300746 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.784558  300746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 13:37:36.784597  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.784598  300746 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.784694  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.813445  300746 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 13:37:36.813491  300746 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.813544  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.825275  300746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 13:37:36.825290  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 13:37:36.825392  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825463  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 13:37:36.825327  300746 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:36.825515  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:36.852786  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:37:36.852866  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:36.852822  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:37:36.852843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 13:37:36.852984  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:37.587824  300746 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:37.028009  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028349  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.028378  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.028295  302181 retry.go:31] will retry after 507.597159ms: waiting for machine to come up
	I0729 13:37:37.538138  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538550  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:37.538581  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:37.538507  302181 retry.go:31] will retry after 508.855087ms: waiting for machine to come up
	I0729 13:37:38.049628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050241  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.050277  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.050198  302181 retry.go:31] will retry after 889.089993ms: waiting for machine to come up
	I0729 13:37:38.940541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941066  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:38.941096  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:38.941009  302181 retry.go:31] will retry after 891.889885ms: waiting for machine to come up
	I0729 13:37:39.834956  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835395  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:39.835423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:39.835341  302181 retry.go:31] will retry after 1.030799215s: waiting for machine to come up
	I0729 13:37:40.867814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868336  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:40.868367  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:40.868283  302181 retry.go:31] will retry after 1.40369357s: waiting for machine to come up
	I0729 13:37:38.870850  300746 ssh_runner.go:235] Completed: which crictl: (2.045307778s)
	I0729 13:37:38.870925  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:37:38.870921  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.045429354s)
	I0729 13:37:38.870946  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 13:37:38.871001  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0: (2.018116939s)
	I0729 13:37:38.871024  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.01808875s)
	I0729 13:37:38.871054  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871083  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.018080011s)
	I0729 13:37:38.871109  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 13:37:38.871120  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871056  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 13:37:38.871166  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 13:37:38.871151  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:38.871234  300746 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0: (2.018278547s)
	I0729 13:37:38.871247  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:38.871259  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 13:37:38.871304  300746 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.283446632s)
	I0729 13:37:38.871343  300746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 13:37:38.871372  300746 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:38.871406  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:37:38.871310  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:38.939395  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:38.939419  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 13:37:38.939532  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:40.939632  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.068434649s)
	I0729 13:37:40.939669  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 13:37:40.939693  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939702  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.068259157s)
	I0729 13:37:40.939734  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 13:37:40.939761  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 13:37:40.939794  300746 ssh_runner.go:235] Completed: which crictl: (2.068372626s)
	I0729 13:37:40.939827  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.068564103s)
	I0729 13:37:40.939843  300746 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:37:40.939844  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.000295325s)
	I0729 13:37:40.939847  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 13:37:40.939856  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 13:37:40.999406  300746 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 13:37:40.999505  300746 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:43.015187  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.075399061s)
	I0729 13:37:43.015226  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 13:37:43.015243  300746 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.015694914s)
	I0729 13:37:43.015259  300746 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:43.015279  300746 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 13:37:43.015313  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 13:37:42.273822  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:42.274326  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:42.274251  302181 retry.go:31] will retry after 2.255017939s: waiting for machine to come up
	I0729 13:37:44.531432  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531845  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:44.531873  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:44.531801  302181 retry.go:31] will retry after 2.272405743s: waiting for machine to come up
	I0729 13:37:46.401061  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.385713069s)
	I0729 13:37:46.401109  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 13:37:46.401147  300746 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:46.401207  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 13:37:48.358628  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.9573934s)
	I0729 13:37:48.358659  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 13:37:48.358682  300746 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:48.358733  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 13:37:46.806043  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806654  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:46.806681  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:46.806599  302181 retry.go:31] will retry after 2.212726673s: waiting for machine to come up
	I0729 13:37:49.022244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022732  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | unable to find current IP address of domain default-k8s-diff-port-972693 in network mk-default-k8s-diff-port-972693
	I0729 13:37:49.022770  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | I0729 13:37:49.022677  302181 retry.go:31] will retry after 3.071460325s: waiting for machine to come up
	I0729 13:37:50.216727  300746 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.857925776s)
	I0729 13:37:50.216769  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 13:37:50.216822  300746 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.216879  300746 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 13:37:50.862685  300746 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 13:37:50.862738  300746 cache_images.go:123] Successfully loaded all cached images
	I0729 13:37:50.862746  300746 cache_images.go:92] duration metric: took 14.49548231s to LoadCachedImages
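Taken together, the LoadCachedImages sequence above reduces to a per-image cycle: ask the runtime whether the image is already present, remove any stale or mismatched tag, then stream the cached tarball in with podman load. A condensed sketch of that cycle for one image, using paths and tags from this run (it simplifies the real logic, which also compares the image ID against an expected hash before deciding the image "needs transfer"):

	img=registry.k8s.io/kube-proxy:v1.31.0-beta.0
	tarball=/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	# ask the runtime for the image ID; podman exits non-zero if the image is unknown
	if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
		sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # drop any stale tag first
		sudo podman load -i "$tarball"                        # import the cached tarball
	fi

In this run the loads execute sequentially and account for most of the 14.5s reported for LoadCachedImages.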
	I0729 13:37:50.862763  300746 kubeadm.go:934] updating node { 192.168.61.84 8443 v1.31.0-beta.0 crio true true} ...
	I0729 13:37:50.862924  300746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-566777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
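In the kubelet unit snippet above, the empty ExecStart= line is the standard systemd drop-in trick: it clears the ExecStart inherited from the packaged kubelet.service so the following line can substitute the node-specific command. The systemctl commands below are one way to inspect and apply such a drop-in; the daemon-reload and start steps are the same ones the log runs a few lines further down, while systemctl cat is just a convenient inspection command not shown in the log:

	# show the base unit plus every drop-in that overrides it
	sudo systemctl cat kubelet
	# pick up the new drop-in and (re)start the service
	sudo systemctl daemon-reload
	sudo systemctl start kubelet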
	I0729 13:37:50.863021  300746 ssh_runner.go:195] Run: crio config
	I0729 13:37:50.911526  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:50.911551  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:50.911563  300746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:50.911593  300746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.84 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-566777 NodeName:no-preload-566777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:50.911782  300746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-566777"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:50.911856  300746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 13:37:50.922091  300746 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:50.922162  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:50.931275  300746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 13:37:50.947494  300746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 13:37:50.963108  300746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
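Note that the rendered kubeadm config is staged as kubeadm.yaml.new rather than written over the live file; later in the log it is diffed against the existing /var/tmp/minikube/kubeadm.yaml and only then copied into place. The staging/swap step, condensed from those later log lines:

	# compare the freshly rendered config with whatever the node already has ...
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	# ... and promote the new file once the difference has been evaluated
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml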
	I0729 13:37:50.979666  300746 ssh_runner.go:195] Run: grep 192.168.61.84	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:50.983215  300746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:50.994627  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:51.117275  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:51.134412  300746 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777 for IP: 192.168.61.84
	I0729 13:37:51.134439  300746 certs.go:194] generating shared ca certs ...
	I0729 13:37:51.134461  300746 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:51.134641  300746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:51.134692  300746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:51.134703  300746 certs.go:256] generating profile certs ...
	I0729 13:37:51.134825  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/client.key
	I0729 13:37:51.134901  300746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key.445c667e
	I0729 13:37:51.134962  300746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key
	I0729 13:37:51.135114  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:51.135153  300746 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:51.135166  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:51.135196  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:51.135225  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:51.135256  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:51.135309  300746 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:51.136036  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:51.169507  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:51.201916  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:51.227860  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:51.263617  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:37:51.288105  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:37:51.314837  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:51.343892  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/no-preload-566777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:37:51.367328  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:51.389470  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:51.411446  300746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:51.433270  300746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:51.448939  300746 ssh_runner.go:195] Run: openssl version
	I0729 13:37:51.454475  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:51.465080  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469541  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.469605  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:51.475366  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:51.485979  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:51.496382  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500511  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.500571  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:51.505997  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:51.516733  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:51.527637  300746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531754  300746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.531797  300746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:51.537237  300746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
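The ls/openssl/ln sequence above is how an OpenSSL-style CA directory is maintained: each CA certificate under /etc/ssl/certs must be reachable via a symlink named <subject-hash>.0, and openssl x509 -hash -noout prints exactly that hash. A minimal sketch of installing one extra CA this way, with the file name and hash taken from this run:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 for this CA
	# link the cert under the hash name so OpenSSL's CApath lookup can find it
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"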
	I0729 13:37:51.548006  300746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:51.552581  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:51.558414  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:51.563879  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:51.569869  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:51.575800  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:37:51.581525  300746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
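The batch of openssl x509 -checkend 86400 runs above is the standard shell-level test for "does this certificate expire within the next N seconds" (here 24 hours): the command exits 0 if the certificate is still valid past that horizon and non-zero otherwise. A small sketch looping the same check over several of the certs named in the log:

	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
		# exit status 0 means the cert remains valid for at least another 86400s (24h)
		if ! openssl x509 -noout -in "$crt" -checkend 86400; then
			echo "$crt expires within 24h" >&2
		fi
	done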
	I0729 13:37:51.587642  300746 kubeadm.go:392] StartCluster: {Name:no-preload-566777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-566777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:51.587777  300746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:37:51.587828  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.627118  300746 cri.go:89] found id: ""
	I0729 13:37:51.627212  300746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:37:51.637686  300746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:37:51.637711  300746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:37:51.637765  300746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:37:51.647368  300746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:37:51.648291  300746 kubeconfig.go:125] found "no-preload-566777" server: "https://192.168.61.84:8443"
	I0729 13:37:51.650296  300746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:37:51.659616  300746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.84
	I0729 13:37:51.659649  300746 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:37:51.659663  300746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:37:51.659714  300746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:37:51.700636  300746 cri.go:89] found id: ""
	I0729 13:37:51.700703  300746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:37:51.718225  300746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:37:51.728237  300746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:37:51.728257  300746 kubeadm.go:157] found existing configuration files:
	
	I0729 13:37:51.728303  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:37:51.738280  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:37:51.738364  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:37:51.748770  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:37:51.758572  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:37:51.758649  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:37:51.769634  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.779757  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:37:51.779827  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:37:51.790745  300746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:37:51.801212  300746 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:37:51.801275  300746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:37:51.811706  300746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:37:51.821251  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:51.933905  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
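Because the stale-config check above found none of the four kubeconfig files, the restart path regenerates them piecewise with kubeadm init phase rather than a full kubeadm init. The two phases invoked here can be run by hand the same way against the generated config; the paths below are the ones from this run:

	cfg=/var/tmp/minikube/kubeadm.yaml
	bindir=/var/lib/minikube/binaries/v1.31.0-beta.0
	# regenerate the cluster certificates, then the admin/kubelet/controller-manager/scheduler kubeconfigs
	sudo env PATH="$bindir:$PATH" kubeadm init phase certs all --config "$cfg"
	sudo env PATH="$bindir:$PATH" kubeadm init phase kubeconfig all --config "$cfg"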
	I0729 13:37:53.401823  301425 start.go:364] duration metric: took 3m42.323534375s to acquireMachinesLock for "old-k8s-version-924039"
	I0729 13:37:53.401902  301425 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:53.401914  301425 fix.go:54] fixHost starting: 
	I0729 13:37:53.402310  301425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:37:53.402344  301425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:53.421973  301425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0729 13:37:53.422456  301425 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:53.423079  301425 main.go:141] libmachine: Using API Version  1
	I0729 13:37:53.423112  301425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:53.423508  301425 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:53.423734  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:37:53.423883  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetState
	I0729 13:37:53.425687  301425 fix.go:112] recreateIfNeeded on old-k8s-version-924039: state=Stopped err=<nil>
	I0729 13:37:53.425733  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	W0729 13:37:53.425902  301425 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:53.427931  301425 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-924039" ...
	I0729 13:37:52.097443  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.097870  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Found IP for machine: 192.168.50.34
	I0729 13:37:52.097904  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserving static IP address...
	I0729 13:37:52.097923  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has current primary IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.098329  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.098357  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Reserved static IP address: 192.168.50.34
	I0729 13:37:52.098377  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | skip adding static IP to network mk-default-k8s-diff-port-972693 - found existing host DHCP lease matching {name: "default-k8s-diff-port-972693", mac: "52:54:00:be:67:cb", ip: "192.168.50.34"}
	I0729 13:37:52.098406  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Waiting for SSH to be available...
	I0729 13:37:52.098423  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Getting to WaitForSSH function...
	I0729 13:37:52.100530  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.100878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.100908  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.101029  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH client type: external
	I0729 13:37:52.101062  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa (-rw-------)
	I0729 13:37:52.101106  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:37:52.101121  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | About to run SSH command:
	I0729 13:37:52.101145  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | exit 0
	I0729 13:37:52.225041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | SSH cmd err, output: <nil>: 
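The "Using SSH client type: external" lines above show that libmachine probes the newly booted VM by shelling out to the system ssh binary with host-key checking disabled, repeating the probe (a bare "exit 0") until it succeeds. Reassembled from the argument list logged above, the probe is roughly:

	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa \
	    -p 22 docker@192.168.50.34 "exit 0"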
	I0729 13:37:52.225381  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetConfigRaw
	I0729 13:37:52.226001  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.228722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229109  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.229140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.229315  301044 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/config.json ...
	I0729 13:37:52.229522  301044 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:52.229541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:52.229716  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.231823  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232140  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.232181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.232260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.232446  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232613  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.232758  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.232913  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.233100  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.233111  301044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:52.336948  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:37:52.336978  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337288  301044 buildroot.go:166] provisioning hostname "default-k8s-diff-port-972693"
	I0729 13:37:52.337321  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.337552  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.340284  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340598  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.340623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.340724  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.340913  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341090  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.341261  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.341419  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.341591  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.341603  301044 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-972693 && echo "default-k8s-diff-port-972693" | sudo tee /etc/hostname
	I0729 13:37:52.455264  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-972693
	
	I0729 13:37:52.455294  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.457937  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458304  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.458332  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.458465  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.458667  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458857  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.458995  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.459170  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.459352  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.459376  301044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-972693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-972693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-972693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:52.570543  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
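	Note: the two SSH commands above set the guest hostname and pin it to 127.0.1.1 in /etc/hosts so the node can resolve its own name without DNS. A minimal spot-check one could run inside the VM (hostname value taken from the log; the check itself is an editorial sketch, not something minikube runs):

	    hostname                                         # expect: default-k8s-diff-port-972693
	    grep 'default-k8s-diff-port-972693' /etc/hosts   # expect a 127.0.1.1 mapping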
	I0729 13:37:52.570578  301044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:37:52.570603  301044 buildroot.go:174] setting up certificates
	I0729 13:37:52.570617  301044 provision.go:84] configureAuth start
	I0729 13:37:52.570628  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetMachineName
	I0729 13:37:52.570900  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:52.573309  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573609  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.573641  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.573751  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.575826  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.576177  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.576344  301044 provision.go:143] copyHostCerts
	I0729 13:37:52.576414  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:37:52.576483  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:37:52.576568  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:37:52.576698  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:37:52.576707  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:37:52.576728  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:37:52.576786  301044 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:37:52.576815  301044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:37:52.576845  301044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:37:52.576902  301044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-972693 san=[127.0.0.1 192.168.50.34 default-k8s-diff-port-972693 localhost minikube]
	I0729 13:37:52.764928  301044 provision.go:177] copyRemoteCerts
	I0729 13:37:52.764988  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:52.765018  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.767540  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.767842  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.767872  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.768041  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.768213  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.768362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.768474  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:52.847615  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:52.877666  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 13:37:52.901219  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:52.924922  301044 provision.go:87] duration metric: took 354.279838ms to configureAuth
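	Note: configureAuth regenerated the machine's server certificate with the SANs listed earlier (127.0.0.1, 192.168.50.34, the machine name, localhost, minikube) and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A quick way to confirm the SANs on the host-side copy of that cert (assuming openssl is available on the Jenkins host; path taken from the log):

	    openssl x509 -in /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem \
	      -noout -text | grep -A1 'Subject Alternative Name'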
	I0729 13:37:52.924953  301044 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:52.925157  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:52.925244  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:52.927791  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928150  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:52.928181  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:52.928340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:52.928533  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:52.928830  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:52.928978  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:52.929208  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:52.929230  301044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:37:53.176359  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:37:53.176391  301044 machine.go:97] duration metric: took 946.853063ms to provisionDockerMachine
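	Note: the %!s(MISSING) tokens in the command above are printf-verb escaping artifacts in the log, not part of what actually ran on the guest. The step most plausibly executed the following, writing the CRI-O environment drop-in and restarting the service:

	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio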
	I0729 13:37:53.176404  301044 start.go:293] postStartSetup for "default-k8s-diff-port-972693" (driver="kvm2")
	I0729 13:37:53.176419  301044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:37:53.176441  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.176782  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:37:53.176818  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.179340  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.179698  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.179858  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.180053  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.180214  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.180336  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.259826  301044 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:37:53.264059  301044 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:37:53.264087  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:37:53.264155  301044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:37:53.264239  301044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:37:53.264345  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:37:53.273954  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:53.297340  301044 start.go:296] duration metric: took 120.913486ms for postStartSetup
	I0729 13:37:53.297392  301044 fix.go:56] duration metric: took 18.435815853s for fixHost
	I0729 13:37:53.297421  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.299859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300187  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.300218  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.300362  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.300576  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300755  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.300932  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.301116  301044 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:53.301314  301044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0729 13:37:53.301324  301044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:37:53.401628  301044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260273.369344581
	
	I0729 13:37:53.401671  301044 fix.go:216] guest clock: 1722260273.369344581
	I0729 13:37:53.401682  301044 fix.go:229] Guest: 2024-07-29 13:37:53.369344581 +0000 UTC Remote: 2024-07-29 13:37:53.297397345 +0000 UTC m=+281.644280810 (delta=71.947236ms)
	I0729 13:37:53.401705  301044 fix.go:200] guest clock delta is within tolerance: 71.947236ms
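	Note: the clock check runs `date` on the guest (the garbled verbs on the earlier line correspond to `date +%s.%N`) and compares it with the host's wall clock; the 71.947236ms delta is well inside tolerance. A rough host-side equivalent of that comparison (the ssh target and the use of bc are assumptions for illustration):

	    guest=$(ssh docker@192.168.50.34 'date +%s.%N')
	    host=$(date +%s.%N)
	    echo "guest/host clock delta: $(echo "$host - $guest" | bc)s"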
	I0729 13:37:53.401711  301044 start.go:83] releasing machines lock for "default-k8s-diff-port-972693", held for 18.540175489s
	I0729 13:37:53.401760  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.402061  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:53.404813  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405182  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.405207  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.405359  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.405844  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:37:53.406153  301044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:37:53.406210  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.406289  301044 ssh_runner.go:195] Run: cat /version.json
	I0729 13:37:53.406315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:37:53.409060  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409351  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409460  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409623  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.409814  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.409878  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:53.409909  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:53.409992  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410092  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:37:53.410183  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.410315  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:37:53.410435  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:37:53.410631  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:37:53.510289  301044 ssh_runner.go:195] Run: systemctl --version
	I0729 13:37:53.517635  301044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:37:53.660575  301044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:37:53.668128  301044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:37:53.668207  301044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:37:53.690732  301044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:37:53.690764  301044 start.go:495] detecting cgroup driver to use...
	I0729 13:37:53.690838  301044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:37:53.707461  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:37:53.721922  301044 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:37:53.722004  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:37:53.740941  301044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:37:53.759323  301044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:37:53.900344  301044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:37:54.065647  301044 docker.go:233] disabling docker service ...
	I0729 13:37:54.065780  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:37:54.082468  301044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:37:54.098283  301044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:37:54.213104  301044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:37:54.339560  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:37:54.360412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:37:54.384836  301044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:37:54.384900  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.400889  301044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:37:54.400980  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.416941  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.433090  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.449306  301044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:37:54.461742  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.477135  301044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:37:54.501431  301044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
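	Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch reconstructed from the commands, not a dump of the actual file):

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]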
	I0729 13:37:54.519646  301044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:37:54.532995  301044 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:37:54.533074  301044 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:37:54.550639  301044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:37:54.561896  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:54.710789  301044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:37:54.885480  301044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:37:54.885558  301044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:37:54.890556  301044 start.go:563] Will wait 60s for crictl version
	I0729 13:37:54.890629  301044 ssh_runner.go:195] Run: which crictl
	I0729 13:37:54.894644  301044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:37:54.941141  301044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
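	Note: because /etc/crictl.yaml already names the CRI-O socket, the plain `sudo /usr/bin/crictl version` above is equivalent to passing the endpoint explicitly, which can be useful when the config file is in doubt:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version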
	I0729 13:37:54.941236  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:54.983380  301044 ssh_runner.go:195] Run: crio --version
	I0729 13:37:55.027770  301044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:37:53.429298  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .Start
	I0729 13:37:53.429471  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring networks are active...
	I0729 13:37:53.430263  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network default is active
	I0729 13:37:53.430649  301425 main.go:141] libmachine: (old-k8s-version-924039) Ensuring network mk-old-k8s-version-924039 is active
	I0729 13:37:53.431011  301425 main.go:141] libmachine: (old-k8s-version-924039) Getting domain xml...
	I0729 13:37:53.431825  301425 main.go:141] libmachine: (old-k8s-version-924039) Creating domain...
	I0729 13:37:54.749878  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting to get IP...
	I0729 13:37:54.751148  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.751716  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.751784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.751696  302377 retry.go:31] will retry after 230.330776ms: waiting for machine to come up
	I0729 13:37:54.984551  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:54.985138  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:54.985183  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:54.985094  302377 retry.go:31] will retry after 291.000555ms: waiting for machine to come up
	I0729 13:37:55.277730  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.278199  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.278220  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.278152  302377 retry.go:31] will retry after 360.474919ms: waiting for machine to come up
	I0729 13:37:55.640675  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:55.641255  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:55.641288  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:55.641207  302377 retry.go:31] will retry after 480.424143ms: waiting for machine to come up
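	Note: the retry loop above is the kvm2 driver polling libvirt's DHCP lease table until the freshly started old-k8s-version-924039 domain acquires an address for MAC 52:54:00:30:f2:7d. When debugging a stuck "Waiting to get IP" phase, the same information can be read directly on the host (assuming virsh is available to the Jenkins user):

	    virsh net-dhcp-leases mk-old-k8s-version-924039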
	I0729 13:37:55.029239  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetIP
	I0729 13:37:55.032722  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033225  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:37:55.033257  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:37:55.033668  301044 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 13:37:55.038429  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:55.056198  301044 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:37:55.056373  301044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:55.056440  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:55.100534  301044 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:37:55.100612  301044 ssh_runner.go:195] Run: which lz4
	I0729 13:37:55.105708  301044 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:37:55.110384  301044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:37:55.110417  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:37:56.630726  301044 crio.go:462] duration metric: took 1.525047583s to copy over tarball
	I0729 13:37:56.630816  301044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:37:53.446825  300746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.51288234s)
	I0729 13:37:53.446866  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.663105  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.740482  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:37:53.823641  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:37:53.823753  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.324001  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.824299  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:37:54.933931  300746 api_server.go:72] duration metric: took 1.11028623s to wait for apiserver process to appear ...
	I0729 13:37:54.933969  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:37:54.933996  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:54.934563  300746 api_server.go:269] stopped: https://192.168.61.84:8443/healthz: Get "https://192.168.61.84:8443/healthz": dial tcp 192.168.61.84:8443: connect: connection refused
	I0729 13:37:55.434598  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.005676  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.005719  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.005737  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.066371  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:37:58.066408  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:37:58.434268  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.439205  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.439240  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:58.934796  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:58.944368  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:58.944399  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.434576  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.443061  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:37:59.443098  300746 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:37:59.934805  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:37:59.943892  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:37:59.955156  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:37:59.955185  300746 api_server.go:131] duration metric: took 5.021207326s to wait for apiserver health ...
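	Note: the healthz progression above is the expected pattern during a control-plane restart: 403 while only anonymous access is possible, 500 while the rbac/bootstrap-roles and priority-class post-start hooks are still running, then 200 once bootstrapping completes (unauthenticated reads of /healthz are permitted by the system:public-info-viewer role created at bootstrap). The same probe can be reproduced from the host; the ?verbose query returns the per-hook breakdown seen in the log (curl invocation is an editorial sketch):

	    curl -k https://192.168.61.84:8443/healthz?verbose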
	I0729 13:37:59.955197  300746 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.955205  300746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:00.307264  300746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:37:56.123854  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.124460  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.124487  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.124433  302377 retry.go:31] will retry after 529.614291ms: waiting for machine to come up
	I0729 13:37:56.656136  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:56.656626  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:56.656657  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:56.656599  302377 retry.go:31] will retry after 794.429248ms: waiting for machine to come up
	I0729 13:37:57.452523  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:57.453001  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:57.453033  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:57.452952  302377 retry.go:31] will retry after 1.140583184s: waiting for machine to come up
	I0729 13:37:58.594636  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:58.595067  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:58.595109  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:58.595024  302377 retry.go:31] will retry after 894.563974ms: waiting for machine to come up
	I0729 13:37:59.491447  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:37:59.492094  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:37:59.492120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:37:59.491993  302377 retry.go:31] will retry after 1.145531829s: waiting for machine to come up
	I0729 13:38:00.639387  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:00.639807  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:00.639838  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:00.639754  302377 retry.go:31] will retry after 1.949675091s: waiting for machine to come up
	I0729 13:37:58.983188  301044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.352336314s)
	I0729 13:37:58.983233  301044 crio.go:469] duration metric: took 2.352468802s to extract the tarball
	I0729 13:37:58.983245  301044 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:37:59.022539  301044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:37:59.086881  301044 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:37:59.086913  301044 cache_images.go:84] Images are preloaded, skipping loading
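	Note: after the preload tarball is unpacked into /var, the second `crictl images` call confirms that the control-plane images are already present, so nothing needs to be pulled. The same check is easy to run by hand on the node (the grep filter is illustrative):

	    sudo crictl images | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver  v1.30.3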
	I0729 13:37:59.086924  301044 kubeadm.go:934] updating node { 192.168.50.34 8444 v1.30.3 crio true true} ...
	I0729 13:37:59.087062  301044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-972693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
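	Note: the empty `ExecStart=` followed by the full command is the standard systemd drop-in idiom: the blank assignment clears the ExecStart list inherited from kubelet.service before redefining it, so only the minikube-specific invocation runs. Per the scp later in the log (10-kubeadm.conf, 327 bytes), the text dumped above lands on the node roughly as:

	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    [Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet <flags as dumped above>

	    [Install]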
	I0729 13:37:59.087158  301044 ssh_runner.go:195] Run: crio config
	I0729 13:37:59.144128  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:37:59.144163  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:37:59.144182  301044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:37:59.144209  301044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-972693 NodeName:default-k8s-diff-port-972693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:37:59.144376  301044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-972693"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:37:59.144452  301044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:37:59.154648  301044 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:37:59.154717  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:37:59.164572  301044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0729 13:37:59.182967  301044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:37:59.202507  301044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0729 13:37:59.221603  301044 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0729 13:37:59.226646  301044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:37:59.244199  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:37:59.390312  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:37:59.411152  301044 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693 for IP: 192.168.50.34
	I0729 13:37:59.411178  301044 certs.go:194] generating shared ca certs ...
	I0729 13:37:59.411213  301044 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:37:59.411421  301044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:37:59.411481  301044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:37:59.411495  301044 certs.go:256] generating profile certs ...
	I0729 13:37:59.411614  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/client.key
	I0729 13:37:59.411709  301044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key.0cff1f82
	I0729 13:37:59.411780  301044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key
	I0729 13:37:59.411977  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:37:59.412036  301044 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:37:59.412052  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:37:59.412090  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:37:59.412124  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:37:59.412156  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:37:59.412221  301044 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:37:59.413262  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:37:59.450186  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:37:59.496339  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:37:59.535462  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:37:59.569433  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:37:59.602826  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:37:59.639581  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:37:59.672966  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/default-k8s-diff-port-972693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:37:59.707007  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:37:59.741894  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:37:59.771364  301044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:37:59.802928  301044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:37:59.828730  301044 ssh_runner.go:195] Run: openssl version
	I0729 13:37:59.837356  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:37:59.855071  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861707  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.861781  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:37:59.870815  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:37:59.884842  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:37:59.899473  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904238  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.904312  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:37:59.910221  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:37:59.923542  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:37:59.936729  301044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943440  301044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.943496  301044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:37:59.951099  301044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:37:59.964578  301044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:37:59.969476  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:37:59.975715  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:37:59.981719  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:37:59.987788  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:37:59.993753  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:00.000228  301044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:38:00.007898  301044 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-972693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-972693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:00.008033  301044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:00.008091  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.054999  301044 cri.go:89] found id: ""
	I0729 13:38:00.055097  301044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:00.069066  301044 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:00.069090  301044 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:00.069148  301044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:00.083486  301044 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:00.084538  301044 kubeconfig.go:125] found "default-k8s-diff-port-972693" server: "https://192.168.50.34:8444"
	I0729 13:38:00.086623  301044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:00.099514  301044 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.34
	I0729 13:38:00.099555  301044 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:00.099570  301044 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:00.099644  301044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:00.137643  301044 cri.go:89] found id: ""
	I0729 13:38:00.137726  301044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:00.157036  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:00.168591  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:00.168614  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:00.168664  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:38:00.178379  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:00.178449  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:00.189688  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:38:00.199323  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:00.199388  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:00.209351  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.219100  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:00.219171  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:00.228754  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:38:00.238453  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:00.238526  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:00.248479  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:00.258717  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.377121  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:00.413128  300746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:00.424610  300746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:00.446537  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:01.601214  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:01.601265  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:01.601278  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:01.601296  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:01.601305  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:01.601312  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:38:01.601323  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:01.601332  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:01.601346  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:01.601357  300746 system_pods.go:74] duration metric: took 1.154789909s to wait for pod list to return data ...
	I0729 13:38:01.601370  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:02.057111  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:02.057149  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:02.057182  300746 node_conditions.go:105] duration metric: took 455.806302ms to run NodePressure ...
	I0729 13:38:02.057210  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.420014  300746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426444  300746 kubeadm.go:739] kubelet initialised
	I0729 13:38:02.426467  300746 kubeadm.go:740] duration metric: took 6.420611ms waiting for restarted kubelet to initialise ...
	I0729 13:38:02.426478  300746 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:02.431168  300746 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.436892  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436916  300746 pod_ready.go:81] duration metric: took 5.728016ms for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.436925  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.436932  300746 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.443079  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443102  300746 pod_ready.go:81] duration metric: took 6.163444ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.443110  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "etcd-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.443115  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.447945  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447964  300746 pod_ready.go:81] duration metric: took 4.843364ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.447973  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-apiserver-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.447980  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.457004  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457027  300746 pod_ready.go:81] duration metric: took 9.037058ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.457038  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.457045  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:02.825208  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825246  300746 pod_ready.go:81] duration metric: took 368.180356ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:02.825259  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-proxy-ql6wf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:02.825268  300746 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.225868  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.225975  300746 pod_ready.go:81] duration metric: took 400.697293ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.225993  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "kube-scheduler-no-preload-566777" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.226003  300746 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:03.627568  300746 pod_ready.go:97] node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627605  300746 pod_ready.go:81] duration metric: took 401.589314ms for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:03.627618  300746 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-566777" hosting pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:03.627628  300746 pod_ready.go:38] duration metric: took 1.201138036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:03.627651  300746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:03.646855  300746 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:03.646893  300746 kubeadm.go:597] duration metric: took 12.009173344s to restartPrimaryControlPlane
	I0729 13:38:03.646910  300746 kubeadm.go:394] duration metric: took 12.059279913s to StartCluster
	I0729 13:38:03.646936  300746 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.647029  300746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:03.649213  300746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:03.649527  300746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:03.649810  300746 config.go:182] Loaded profile config "no-preload-566777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 13:38:03.649861  300746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:03.649931  300746 addons.go:69] Setting storage-provisioner=true in profile "no-preload-566777"
	I0729 13:38:03.649962  300746 addons.go:234] Setting addon storage-provisioner=true in "no-preload-566777"
	W0729 13:38:03.649974  300746 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:03.650021  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650400  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.650428  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.650493  300746 addons.go:69] Setting default-storageclass=true in profile "no-preload-566777"
	I0729 13:38:03.650533  300746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-566777"
	I0729 13:38:03.650601  300746 addons.go:69] Setting metrics-server=true in profile "no-preload-566777"
	I0729 13:38:03.650631  300746 addons.go:234] Setting addon metrics-server=true in "no-preload-566777"
	W0729 13:38:03.650642  300746 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:03.650675  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.650985  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651014  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651029  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.651054  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.651324  300746 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:03.652887  300746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:03.670088  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0729 13:38:03.670283  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0729 13:38:03.670694  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.670769  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.671418  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671423  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.671437  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671440  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.671755  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0729 13:38:03.671900  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.671927  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.672491  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.672515  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.672711  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.673183  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.673207  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.673468  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.673480  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.673857  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.674012  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.677726  300746 addons.go:234] Setting addon default-storageclass=true in "no-preload-566777"
	W0729 13:38:03.677746  300746 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:03.677777  300746 host.go:66] Checking if "no-preload-566777" exists ...
	I0729 13:38:03.678133  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.678151  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.692817  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 13:38:03.693446  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.693919  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.693945  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.694335  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.694504  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.694718  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0729 13:38:03.695225  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.695726  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.695744  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.696028  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.696154  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.696514  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.697635  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.698597  300746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:03.699466  300746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:03.700447  300746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:03.700463  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:03.700481  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.701375  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:03.701390  300746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:03.701404  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.705199  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705225  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705844  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705866  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705893  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.705911  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.705946  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706143  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.706313  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.706471  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.706755  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.708988  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.710193  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0729 13:38:03.710735  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.711282  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.711296  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.711684  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.712271  300746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:03.712322  300746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:03.712966  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.713103  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.756710  300746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 13:38:03.757254  300746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:03.757760  300746 main.go:141] libmachine: Using API Version  1
	I0729 13:38:03.757784  300746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:03.758125  300746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:03.758376  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetState
	I0729 13:38:03.760315  300746 main.go:141] libmachine: (no-preload-566777) Calling .DriverName
	I0729 13:38:03.760577  300746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:03.760594  300746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:03.760612  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHHostname
	I0729 13:38:03.763679  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.764208  300746 main.go:141] libmachine: (no-preload-566777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:42:1a", ip: ""} in network mk-no-preload-566777: {Iface:virbr3 ExpiryTime:2024-07-29 14:37:26 +0000 UTC Type:0 Mac:52:54:00:c4:42:1a Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:no-preload-566777 Clientid:01:52:54:00:c4:42:1a}
	I0729 13:38:03.764277  300746 main.go:141] libmachine: (no-preload-566777) DBG | domain no-preload-566777 has defined IP address 192.168.61.84 and MAC address 52:54:00:c4:42:1a in network mk-no-preload-566777
	I0729 13:38:03.765045  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHPort
	I0729 13:38:03.765227  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHKeyPath
	I0729 13:38:03.765386  300746 main.go:141] libmachine: (no-preload-566777) Calling .GetSSHUsername
	I0729 13:38:03.765546  300746 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/no-preload-566777/id_rsa Username:docker}
	I0729 13:38:03.883257  300746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:03.905104  300746 node_ready.go:35] waiting up to 6m0s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:03.985382  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:03.985412  300746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:04.014094  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:04.014119  300746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:04.016390  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:04.047695  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:04.062249  300746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:04.062328  300746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:04.095999  300746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:05.473341  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4569173s)
	I0729 13:38:05.473396  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473409  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.473421  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.425688075s)
	I0729 13:38:05.473547  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.473558  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474089  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.474117  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474129  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.474133  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474137  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.474142  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474158  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.474148  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.474213  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.475707  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.475738  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.475746  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.476002  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.476095  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.476124  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.490038  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.490081  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.490420  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.490440  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562064  300746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46596112s)
	I0729 13:38:05.562122  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562136  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.562492  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.562516  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.562532  300746 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:05.562541  300746 main.go:141] libmachine: (no-preload-566777) Calling .Close
	I0729 13:38:05.564397  300746 main.go:141] libmachine: (no-preload-566777) DBG | Closing plugin on server side
	I0729 13:38:05.564410  300746 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:05.564448  300746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:05.564471  300746 addons.go:475] Verifying addon metrics-server=true in "no-preload-566777"
	I0729 13:38:05.566888  300746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 13:38:02.590640  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:02.591134  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:02.591162  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:02.591087  302377 retry.go:31] will retry after 1.765945358s: waiting for machine to come up
	I0729 13:38:04.358332  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:04.358934  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:04.358963  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:04.358899  302377 retry.go:31] will retry after 2.923224015s: waiting for machine to come up
	I0729 13:38:01.713425  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.33625836s)
	I0729 13:38:01.713462  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:01.941164  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.017707  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:02.134991  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:02.135105  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:02.636248  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.135563  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:03.264470  301044 api_server.go:72] duration metric: took 1.129485078s to wait for apiserver process to appear ...
	I0729 13:38:03.264512  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:03.264545  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.392570  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.392609  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.392626  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.423076  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:06.423120  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:06.764837  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:06.770393  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:06.770428  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.264879  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.269632  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:07.269670  301044 api_server.go:103] status: https://192.168.50.34:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:07.764878  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:38:07.770291  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:38:07.781660  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:07.781691  301044 api_server.go:131] duration metric: took 4.517171532s to wait for apiserver health ...
	I0729 13:38:07.781700  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:38:07.781707  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:07.784769  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:05.568441  300746 addons.go:510] duration metric: took 1.918571396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:38:05.916109  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:07.284234  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:07.284764  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:07.284819  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:07.284694  302377 retry.go:31] will retry after 2.9786525s: waiting for machine to come up
	I0729 13:38:10.265771  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:10.266128  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | unable to find current IP address of domain old-k8s-version-924039 in network mk-old-k8s-version-924039
	I0729 13:38:10.266161  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | I0729 13:38:10.266077  302377 retry.go:31] will retry after 5.044155966s: waiting for machine to come up
	I0729 13:38:07.786038  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:07.824838  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:38:07.850139  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:07.862900  301044 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:07.862932  301044 system_pods.go:61] "coredns-7db6d8ff4d-zllk5" [3ebb659a-7849-498b-a81c-54f75c8e1536] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:07.862943  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [fc5c7286-5cd4-4eeb-879e-6263f82c4164] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:07.862950  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [a3a13c0b-844d-4a5b-93a0-fb9784b4b095] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:07.862957  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4e6c469d-b2a5-4ec2-95a4-01b6ad7de347] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:07.862964  301044 system_pods.go:61] "kube-proxy-6hxkb" [42b01d8b-9a37-40d0-ac32-09e3e261f953] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:07.862979  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [2373a650-57bb-4dc3-96ab-7f6cd040c148] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:07.862985  301044 system_pods.go:61] "metrics-server-569cc877fc-dlrjb" [360087fa-273d-4ba8-a299-54678724c45e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:07.862990  301044 system_pods.go:61] "storage-provisioner" [3e3fb5ef-6761-4671-a093-8616241cd98f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:07.862996  301044 system_pods.go:74] duration metric: took 12.833023ms to wait for pod list to return data ...
	I0729 13:38:07.863007  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:07.868359  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:07.868385  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:07.868395  301044 node_conditions.go:105] duration metric: took 5.383164ms to run NodePressure ...
	I0729 13:38:07.868412  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:08.166890  301044 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175546  301044 kubeadm.go:739] kubelet initialised
	I0729 13:38:08.175570  301044 kubeadm.go:740] duration metric: took 8.646638ms waiting for restarted kubelet to initialise ...
	I0729 13:38:08.175588  301044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.186944  301044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.194446  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194479  301044 pod_ready.go:81] duration metric: took 7.500494ms for pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.194487  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "coredns-7db6d8ff4d-zllk5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.194495  301044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.202341  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202366  301044 pod_ready.go:81] duration metric: took 7.863125ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.202380  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.202388  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.209017  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209041  301044 pod_ready.go:81] duration metric: took 6.646309ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.209051  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.209057  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.256503  301044 pod_ready.go:97] node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256530  301044 pod_ready.go:81] duration metric: took 47.465005ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:08.256543  301044 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-972693" hosting pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-972693" has status "Ready":"False"
	I0729 13:38:08.256552  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652875  301044 pod_ready.go:92] pod "kube-proxy-6hxkb" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:08.652901  301044 pod_ready.go:81] duration metric: took 396.340654ms for pod "kube-proxy-6hxkb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:08.652912  301044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.658352  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:08.411629  300746 node_ready.go:53] node "no-preload-566777" has status "Ready":"False"
	I0729 13:38:08.908602  300746 node_ready.go:49] node "no-preload-566777" has status "Ready":"True"
	I0729 13:38:08.908629  300746 node_ready.go:38] duration metric: took 5.003487604s for node "no-preload-566777" to be "Ready" ...
	I0729 13:38:08.908639  300746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:08.914468  300746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:10.921796  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.313102  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313621  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has current primary IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.313650  301425 main.go:141] libmachine: (old-k8s-version-924039) Found IP for machine: 192.168.39.227
	I0729 13:38:15.313665  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserving static IP address...
	I0729 13:38:15.314120  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.314168  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | skip adding static IP to network mk-old-k8s-version-924039 - found existing host DHCP lease matching {name: "old-k8s-version-924039", mac: "52:54:00:30:f2:7d", ip: "192.168.39.227"}
	I0729 13:38:15.314187  301425 main.go:141] libmachine: (old-k8s-version-924039) Reserved static IP address: 192.168.39.227
	I0729 13:38:15.314205  301425 main.go:141] libmachine: (old-k8s-version-924039) Waiting for SSH to be available...
	I0729 13:38:15.314219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Getting to WaitForSSH function...
	I0729 13:38:15.316468  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316779  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.316827  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.316994  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH client type: external
	I0729 13:38:15.317013  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa (-rw-------)
	I0729 13:38:15.317042  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:15.317054  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | About to run SSH command:
	I0729 13:38:15.317076  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | exit 0
	I0729 13:38:15.444818  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:15.445203  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetConfigRaw
	I0729 13:38:15.445858  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.448296  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.448784  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.448834  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.449028  301425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/config.json ...
	I0729 13:38:15.449208  301425 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:15.449226  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:15.449469  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.451695  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452017  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.452046  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.452210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.452420  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452606  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.452770  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.452945  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.453151  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.453165  301425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:15.561558  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 13:38:15.561590  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.561859  301425 buildroot.go:166] provisioning hostname "old-k8s-version-924039"
	I0729 13:38:15.561887  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.562079  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.564776  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565116  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.565157  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.565286  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.565495  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565669  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.565805  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.565952  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.566129  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.566140  301425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-924039 && echo "old-k8s-version-924039" | sudo tee /etc/hostname
	I0729 13:38:15.687712  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-924039
	
	I0729 13:38:15.687744  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.690289  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690614  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.690638  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.690864  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.691104  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691290  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.691463  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.691649  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:15.691841  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:15.691869  301425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-924039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-924039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-924039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:15.814102  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:15.814140  301425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:15.814190  301425 buildroot.go:174] setting up certificates
	I0729 13:38:15.814198  301425 provision.go:84] configureAuth start
	I0729 13:38:15.814210  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetMachineName
	I0729 13:38:15.814521  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:15.817140  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817548  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.817583  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.817728  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.819957  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820307  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.820335  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.820476  301425 provision.go:143] copyHostCerts
	I0729 13:38:15.820529  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:15.820539  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:15.820592  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:15.820685  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:15.820693  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:15.820713  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:15.820772  301425 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:15.820779  301425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:15.820828  301425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:15.820909  301425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-924039 san=[127.0.0.1 192.168.39.227 localhost minikube old-k8s-version-924039]
	I0729 13:38:15.895797  301425 provision.go:177] copyRemoteCerts
	I0729 13:38:15.895866  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:15.895898  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:15.898774  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899173  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:15.899214  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:15.899444  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:15.899672  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:15.899882  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:15.900048  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.606081  300705 start.go:364] duration metric: took 56.40993179s to acquireMachinesLock for "embed-certs-135920"
	I0729 13:38:16.606131  300705 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:38:16.606139  300705 fix.go:54] fixHost starting: 
	I0729 13:38:16.606611  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:16.606652  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:16.626502  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0729 13:38:16.626989  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:16.627491  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:16.627511  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:16.627897  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:16.628100  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:16.628242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:16.629856  300705 fix.go:112] recreateIfNeeded on embed-certs-135920: state=Stopped err=<nil>
	I0729 13:38:16.629879  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	W0729 13:38:16.630046  300705 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:38:16.632177  300705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-135920" ...
	I0729 13:38:12.659133  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.159457  301044 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.159792  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.159818  301044 pod_ready.go:81] duration metric: took 7.506898395s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.159827  301044 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.633625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Start
	I0729 13:38:16.633803  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring networks are active...
	I0729 13:38:16.634580  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network default is active
	I0729 13:38:16.634947  300705 main.go:141] libmachine: (embed-certs-135920) Ensuring network mk-embed-certs-135920 is active
	I0729 13:38:16.635454  300705 main.go:141] libmachine: (embed-certs-135920) Getting domain xml...
	I0729 13:38:16.636201  300705 main.go:141] libmachine: (embed-certs-135920) Creating domain...
	I0729 13:38:15.988091  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:16.019058  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 13:38:16.047266  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:16.072992  301425 provision.go:87] duration metric: took 258.777499ms to configureAuth
	I0729 13:38:16.073029  301425 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:16.073250  301425 config.go:182] Loaded profile config "old-k8s-version-924039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 13:38:16.073338  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.075801  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.076219  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.076350  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.076560  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076750  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.076972  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.077169  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.077354  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.077369  301425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:16.357614  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:16.357650  301425 machine.go:97] duration metric: took 908.424232ms to provisionDockerMachine
	I0729 13:38:16.357666  301425 start.go:293] postStartSetup for "old-k8s-version-924039" (driver="kvm2")
	I0729 13:38:16.357680  301425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:16.357706  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.358060  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:16.358089  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.360841  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361257  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.361314  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.361410  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.361645  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.361821  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.361987  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.448673  301425 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:16.453435  301425 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:16.453461  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:16.453543  301425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:16.453638  301425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:16.453763  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:16.464185  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:16.490358  301425 start.go:296] duration metric: took 132.675687ms for postStartSetup
	I0729 13:38:16.490422  301425 fix.go:56] duration metric: took 23.088507704s for fixHost
	I0729 13:38:16.490450  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.493249  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493571  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.493612  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.493781  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.494046  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494241  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.494388  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.494561  301425 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:16.494759  301425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0729 13:38:16.494769  301425 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:16.605903  301425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260296.583363181
	
	I0729 13:38:16.605930  301425 fix.go:216] guest clock: 1722260296.583363181
	I0729 13:38:16.605940  301425 fix.go:229] Guest: 2024-07-29 13:38:16.583363181 +0000 UTC Remote: 2024-07-29 13:38:16.490427183 +0000 UTC m=+245.556685019 (delta=92.935998ms)
	I0729 13:38:16.605967  301425 fix.go:200] guest clock delta is within tolerance: 92.935998ms
	I0729 13:38:16.605974  301425 start.go:83] releasing machines lock for "old-k8s-version-924039", held for 23.204101255s
	I0729 13:38:16.606006  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.606296  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:16.609324  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609669  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.609701  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.609826  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610328  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610516  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .DriverName
	I0729 13:38:16.610589  301425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:16.610673  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.610758  301425 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:16.610786  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHHostname
	I0729 13:38:16.613356  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613639  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613689  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.613712  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.613910  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614092  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:16.614112  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:16.614122  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614287  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHPort
	I0729 13:38:16.614307  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614449  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.614496  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHKeyPath
	I0729 13:38:16.614635  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetSSHUsername
	I0729 13:38:16.614771  301425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/old-k8s-version-924039/id_rsa Username:docker}
	I0729 13:38:16.719174  301425 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:16.726348  301425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:16.880130  301425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:16.886410  301425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:16.886484  301425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:16.904120  301425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:16.904151  301425 start.go:495] detecting cgroup driver to use...
	I0729 13:38:16.904222  301425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:16.927036  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:16.947380  301425 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:16.947448  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:16.964612  301425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:16.979266  301425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:17.108950  301425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:17.263118  301425 docker.go:233] disabling docker service ...
	I0729 13:38:17.263192  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:17.282563  301425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:17.299473  301425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:17.448598  301425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:17.568025  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:17.583700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:17.603159  301425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 13:38:17.603223  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.615655  301425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:17.615728  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.628639  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.640456  301425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:17.652160  301425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:17.663864  301425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:17.675293  301425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:17.675361  301425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:17.690427  301425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:17.702163  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:17.831401  301425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:17.985760  301425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:17.985851  301425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:17.990740  301425 start.go:563] Will wait 60s for crictl version
	I0729 13:38:17.990798  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:17.994741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:18.035793  301425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:18.035889  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.065036  301425 ssh_runner.go:195] Run: crio --version
	I0729 13:38:18.097441  301425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 13:38:13.421995  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:15.944090  300746 pod_ready.go:102] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:16.933596  300746 pod_ready.go:92] pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.933621  300746 pod_ready.go:81] duration metric: took 8.019124005s for pod "coredns-5cfdc65f69-kkrqd" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.933634  300746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943434  300746 pod_ready.go:92] pod "etcd-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.943465  300746 pod_ready.go:81] duration metric: took 9.816863ms for pod "etcd-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.943478  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952623  300746 pod_ready.go:92] pod "kube-apiserver-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.952644  300746 pod_ready.go:81] duration metric: took 9.157998ms for pod "kube-apiserver-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.952653  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.956989  300746 pod_ready.go:92] pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.957010  300746 pod_ready.go:81] duration metric: took 4.350015ms for pod "kube-controller-manager-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.957023  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962772  300746 pod_ready.go:92] pod "kube-proxy-ql6wf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:16.962796  300746 pod_ready.go:81] duration metric: took 5.763769ms for pod "kube-proxy-ql6wf" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:16.962807  300746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318604  300746 pod_ready.go:92] pod "kube-scheduler-no-preload-566777" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:17.318632  300746 pod_ready.go:81] duration metric: took 355.816982ms for pod "kube-scheduler-no-preload-566777" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:17.318642  300746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:18.098840  301425 main.go:141] libmachine: (old-k8s-version-924039) Calling .GetIP
	I0729 13:38:18.102182  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102629  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:f2:7d", ip: ""} in network mk-old-k8s-version-924039: {Iface:virbr1 ExpiryTime:2024-07-29 14:38:05 +0000 UTC Type:0 Mac:52:54:00:30:f2:7d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:old-k8s-version-924039 Clientid:01:52:54:00:30:f2:7d}
	I0729 13:38:18.102665  301425 main.go:141] libmachine: (old-k8s-version-924039) DBG | domain old-k8s-version-924039 has defined IP address 192.168.39.227 and MAC address 52:54:00:30:f2:7d in network mk-old-k8s-version-924039
	I0729 13:38:18.102925  301425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:18.107544  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:18.122039  301425 kubeadm.go:883] updating cluster {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:18.122176  301425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:38:18.122249  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:18.169198  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:18.169279  301425 ssh_runner.go:195] Run: which lz4
	I0729 13:38:18.173861  301425 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:18.178840  301425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:18.178881  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 13:38:19.887360  301425 crio.go:462] duration metric: took 1.713549828s to copy over tarball
	I0729 13:38:19.887450  301425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
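
A hypothetical stand-alone reproduction of the extraction step logged just above (13:38:19.887450): unpack the preloaded image tarball into /var with lz4 decompression, preserving security xattrs. It assumes tar and lz4 are on PATH and simply shells out with os/exec; it is not minikube's ssh_runner.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same flags as the logged command: extract with an lz4 filter into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
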
	I0729 13:38:18.167033  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:20.168009  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:17.933984  300705 main.go:141] libmachine: (embed-certs-135920) Waiting to get IP...
	I0729 13:38:17.935033  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:17.935595  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:17.935652  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:17.935560  302586 retry.go:31] will retry after 195.331915ms: waiting for machine to come up
	I0729 13:38:18.133074  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.133566  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.133592  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.133513  302586 retry.go:31] will retry after 348.993714ms: waiting for machine to come up
	I0729 13:38:18.484164  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.484746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.484771  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.484703  302586 retry.go:31] will retry after 372.899167ms: waiting for machine to come up
	I0729 13:38:18.859212  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:18.859721  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:18.859746  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:18.859672  302586 retry.go:31] will retry after 415.38859ms: waiting for machine to come up
	I0729 13:38:19.276241  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.276785  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.276816  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.276715  302586 retry.go:31] will retry after 553.262343ms: waiting for machine to come up
	I0729 13:38:19.831475  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:19.831994  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:19.832030  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:19.831949  302586 retry.go:31] will retry after 579.574559ms: waiting for machine to come up
	I0729 13:38:20.412838  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:20.413273  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:20.413302  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:20.413225  302586 retry.go:31] will retry after 908.712618ms: waiting for machine to come up
	I0729 13:38:21.324197  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:21.324824  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:21.324849  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:21.324723  302586 retry.go:31] will retry after 1.4226484s: waiting for machine to come up
	I0729 13:38:19.328753  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:21.330005  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.836067  301425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.948583188s)
	I0729 13:38:22.836104  301425 crio.go:469] duration metric: took 2.948710335s to extract the tarball
	I0729 13:38:22.836114  301425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:22.878370  301425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:22.921339  301425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 13:38:22.921370  301425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 13:38:22.921445  301425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.921545  301425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.921547  301425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 13:38:22.921633  301425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:22.921475  301425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.921479  301425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.921494  301425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923052  301425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 13:38:22.923712  301425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:22.923723  301425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:22.923733  301425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:22.923743  301425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:22.923803  301425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:22.923923  301425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:22.923976  301425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.079335  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.095210  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.096664  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.109172  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.111720  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.114386  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.200545  301425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 13:38:23.200629  301425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.200698  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.203884  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 13:38:23.261424  301425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 13:38:23.261500  301425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.261528  301425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 13:38:23.261561  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.261569  301425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.261610  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.267971  301425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 13:38:23.268018  301425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.268075  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317322  301425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 13:38:23.317369  301425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.317387  301425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 13:38:23.317422  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317441  301425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.317440  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 13:38:23.317489  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317507  301425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 13:38:23.317530  301425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 13:38:23.317551  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 13:38:23.317588  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 13:38:23.317553  301425 ssh_runner.go:195] Run: which crictl
	I0729 13:38:23.317683  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 13:38:23.322770  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 13:38:23.432764  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 13:38:23.432833  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 13:38:23.432877  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 13:38:23.442661  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 13:38:23.442741  301425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 13:38:23.442785  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 13:38:23.442825  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 13:38:23.481401  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 13:38:23.484727  301425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 13:38:24.057020  301425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:24.203622  301425 cache_images.go:92] duration metric: took 1.282232497s to LoadCachedImages
	W0729 13:38:24.203724  301425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19341-233093/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 13:38:24.203742  301425 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.20.0 crio true true} ...
	I0729 13:38:24.203883  301425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-924039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:24.203996  301425 ssh_runner.go:195] Run: crio config
	I0729 13:38:24.274480  301425 cni.go:84] Creating CNI manager for ""
	I0729 13:38:24.274531  301425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:24.274547  301425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:24.274582  301425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-924039 NodeName:old-k8s-version-924039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 13:38:24.274784  301425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-924039"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:24.274863  301425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 13:38:24.285241  301425 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:24.285333  301425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:24.294677  301425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0729 13:38:24.311572  301425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:24.328768  301425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 13:38:24.346849  301425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:24.351047  301425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:24.364302  301425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:24.502947  301425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:24.524583  301425 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039 for IP: 192.168.39.227
	I0729 13:38:24.524610  301425 certs.go:194] generating shared ca certs ...
	I0729 13:38:24.524626  301425 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:24.524831  301425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:24.524889  301425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:24.524908  301425 certs.go:256] generating profile certs ...
	I0729 13:38:24.525030  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/client.key
	I0729 13:38:24.525090  301425 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key.4e51fa9b
	I0729 13:38:24.525143  301425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key
	I0729 13:38:24.525300  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:24.525345  301425 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:24.525359  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:24.525390  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:24.525416  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:24.525440  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:24.525495  301425 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:24.526416  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:24.593901  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:24.641443  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:24.679927  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:24.740839  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 13:38:24.779899  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:38:24.814327  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:24.842166  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/old-k8s-version-924039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:38:24.868619  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:24.894053  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:24.921437  301425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:24.947676  301425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:24.966469  301425 ssh_runner.go:195] Run: openssl version
	I0729 13:38:24.972780  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:24.985676  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990293  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.990356  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:24.996523  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:25.007631  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:25.018369  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022779  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.022840  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:25.028471  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:25.039307  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:25.050190  301425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054731  301425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.054799  301425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:25.060568  301425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:25.071531  301425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:25.076195  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:25.082194  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:25.088573  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:25.095625  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:25.101900  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:25.107797  301425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
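
The openssl probes above all use `-checkend 86400`, i.e. they ask whether each certificate expires within the next 24 hours. A minimal Go sketch of the same check, using only crypto/x509 from the standard library; the file path is one of the certificates checked in the log and is only an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if NotAfter is within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
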
	I0729 13:38:25.113775  301425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-924039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-924039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:25.113903  301425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:25.113975  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.159804  301425 cri.go:89] found id: ""
	I0729 13:38:25.159887  301425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:25.172248  301425 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:25.172271  301425 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:25.172321  301425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:25.182852  301425 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:25.184249  301425 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-924039" does not appear in /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:25.186246  301425 kubeconfig.go:62] /home/jenkins/minikube-integration/19341-233093/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-924039" cluster setting kubeconfig missing "old-k8s-version-924039" context setting]
	I0729 13:38:25.188334  301425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:25.262355  301425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:25.274019  301425 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0729 13:38:25.274063  301425 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:25.274078  301425 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:25.274141  301425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:25.311295  301425 cri.go:89] found id: ""
	I0729 13:38:25.311365  301425 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:25.330380  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:25.343607  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:25.343651  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:25.343709  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:25.356979  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:25.357048  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:25.370453  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:25.386234  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:25.386308  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:25.403905  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.413906  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:25.414011  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:25.431532  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:25.448250  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:25.448325  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:25.459773  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:25.469841  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:25.584845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:22.667857  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:24.668022  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:22.748882  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:22.749346  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:22.749368  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:22.749292  302586 retry.go:31] will retry after 1.460248931s: waiting for machine to come up
	I0729 13:38:24.212019  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:24.212538  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:24.212567  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:24.212479  302586 retry.go:31] will retry after 1.462429402s: waiting for machine to come up
	I0729 13:38:25.676972  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:25.677407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:25.677429  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:25.677368  302586 retry.go:31] will retry after 2.551129627s: waiting for machine to come up
	I0729 13:38:23.826435  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:25.826981  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.325176  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:26.367294  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.618571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.775377  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:26.860948  301425 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:26.861038  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.361227  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:27.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.362003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:28.861172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.361165  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:29.861469  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.361306  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:30.861442  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
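
The block above polls `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms while waiting for the apiserver process to appear. A hypothetical stand-alone version of that wait loop, with a hard timeout, using only os/exec and time (the pattern and interval come from the log; the 4-minute deadline is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver process")
	os.Exit(1)
}
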
	I0729 13:38:27.167961  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:29.667405  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:28.230763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:28.231276  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:28.231299  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:28.231239  302586 retry.go:31] will retry after 2.333059097s: waiting for machine to come up
	I0729 13:38:30.566386  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:30.566786  300705 main.go:141] libmachine: (embed-certs-135920) DBG | unable to find current IP address of domain embed-certs-135920 in network mk-embed-certs-135920
	I0729 13:38:30.566815  300705 main.go:141] libmachine: (embed-certs-135920) DBG | I0729 13:38:30.566733  302586 retry.go:31] will retry after 3.717362174s: waiting for machine to come up
	I0729 13:38:30.326143  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:32.825635  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:31.361866  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:31.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.361776  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.862004  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.361883  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:33.862010  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.362013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:34.861958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.361390  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:35.861465  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:32.165082  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.165674  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.165885  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:34.288242  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288935  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has current primary IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.288968  300705 main.go:141] libmachine: (embed-certs-135920) Found IP for machine: 192.168.72.207
	I0729 13:38:34.288987  300705 main.go:141] libmachine: (embed-certs-135920) Reserving static IP address...
	I0729 13:38:34.289557  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.289586  300705 main.go:141] libmachine: (embed-certs-135920) Reserved static IP address: 192.168.72.207
	I0729 13:38:34.289604  300705 main.go:141] libmachine: (embed-certs-135920) DBG | skip adding static IP to network mk-embed-certs-135920 - found existing host DHCP lease matching {name: "embed-certs-135920", mac: "52:54:00:36:0f:14", ip: "192.168.72.207"}
	I0729 13:38:34.289619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Getting to WaitForSSH function...
	I0729 13:38:34.289635  300705 main.go:141] libmachine: (embed-certs-135920) Waiting for SSH to be available...
	I0729 13:38:34.291951  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292308  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.292340  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.292589  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH client type: external
	I0729 13:38:34.292619  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa (-rw-------)
	I0729 13:38:34.292651  300705 main.go:141] libmachine: (embed-certs-135920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:38:34.292665  300705 main.go:141] libmachine: (embed-certs-135920) DBG | About to run SSH command:
	I0729 13:38:34.292677  300705 main.go:141] libmachine: (embed-certs-135920) DBG | exit 0
	I0729 13:38:34.417738  300705 main.go:141] libmachine: (embed-certs-135920) DBG | SSH cmd err, output: <nil>: 
	I0729 13:38:34.418128  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetConfigRaw
	I0729 13:38:34.418881  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.421524  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.421875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.421911  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.422113  300705 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/config.json ...
	I0729 13:38:34.422306  300705 machine.go:94] provisionDockerMachine start ...
	I0729 13:38:34.422325  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:34.422544  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.424658  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.425073  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.425167  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.425365  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425575  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.425786  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.425935  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.426155  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.426172  300705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:38:34.529324  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
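
The provisioning step above shells out to the system ssh client with a fixed set of options to run `hostname` on the new machine. A hypothetical stand-alone equivalent of that call, with the host, key path and options copied from the log line at 13:38:34.292651 (this is not the libmachine implementation):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirror the logged ssh invocation: no config file, no host-key prompts,
	// key-only auth against the machine's generated id_rsa.
	out, err := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa",
		"-p", "22",
		"docker@192.168.72.207",
		"hostname").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	fmt.Printf("remote hostname: %s", out)
}
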
	
	I0729 13:38:34.529354  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529600  300705 buildroot.go:166] provisioning hostname "embed-certs-135920"
	I0729 13:38:34.529625  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.529806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.532564  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.532966  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.533001  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.533274  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.533502  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533701  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.533906  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.534116  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.534339  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.534353  300705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-135920 && echo "embed-certs-135920" | sudo tee /etc/hostname
	I0729 13:38:34.651175  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-135920
	
	I0729 13:38:34.651203  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.653763  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.654085  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.654266  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.654460  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654647  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.654838  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.655024  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:34.655230  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:34.655246  300705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-135920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-135920/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-135920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:38:34.769548  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:38:34.769579  300705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19341-233093/.minikube CaCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19341-233093/.minikube}
	I0729 13:38:34.769597  300705 buildroot.go:174] setting up certificates
	I0729 13:38:34.769605  300705 provision.go:84] configureAuth start
	I0729 13:38:34.769613  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetMachineName
	I0729 13:38:34.769910  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:34.772513  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.772833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.772859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.773005  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.775133  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775480  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.775506  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.775607  300705 provision.go:143] copyHostCerts
	I0729 13:38:34.775671  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem, removing ...
	I0729 13:38:34.775681  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem
	I0729 13:38:34.775738  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/ca.pem (1078 bytes)
	I0729 13:38:34.775828  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem, removing ...
	I0729 13:38:34.775836  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem
	I0729 13:38:34.775855  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/cert.pem (1123 bytes)
	I0729 13:38:34.775909  300705 exec_runner.go:144] found /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem, removing ...
	I0729 13:38:34.775916  300705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem
	I0729 13:38:34.775932  300705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19341-233093/.minikube/key.pem (1679 bytes)
	I0729 13:38:34.775981  300705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem org=jenkins.embed-certs-135920 san=[127.0.0.1 192.168.72.207 embed-certs-135920 localhost minikube]
	I0729 13:38:34.901161  300705 provision.go:177] copyRemoteCerts
	I0729 13:38:34.901230  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:38:34.901258  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:34.903730  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904038  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:34.904060  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:34.904245  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:34.904428  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:34.904606  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:34.904726  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:34.986647  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:38:35.010406  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:38:35.033884  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:38:35.057289  300705 provision.go:87] duration metric: took 287.670762ms to configureAuth
	I0729 13:38:35.057318  300705 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:38:35.057521  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:35.057621  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.060303  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060634  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.060667  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.060840  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.061053  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061259  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.061433  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.061599  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.061775  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.061792  300705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:38:35.344890  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:38:35.344923  300705 machine.go:97] duration metric: took 922.603779ms to provisionDockerMachine
	I0729 13:38:35.344936  300705 start.go:293] postStartSetup for "embed-certs-135920" (driver="kvm2")
	I0729 13:38:35.344947  300705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:38:35.344964  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.345304  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:38:35.345341  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.348029  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348420  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.348458  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.348612  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.348832  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.348981  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.349112  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.431975  300705 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:38:35.436416  300705 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:38:35.436441  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/addons for local assets ...
	I0729 13:38:35.436522  300705 filesync.go:126] Scanning /home/jenkins/minikube-integration/19341-233093/.minikube/files for local assets ...
	I0729 13:38:35.436621  300705 filesync.go:149] local asset: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem -> 2403402.pem in /etc/ssl/certs
	I0729 13:38:35.436767  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:38:35.446166  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:35.473466  300705 start.go:296] duration metric: took 128.511199ms for postStartSetup
	I0729 13:38:35.473513  300705 fix.go:56] duration metric: took 18.867373858s for fixHost
	I0729 13:38:35.473540  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.476118  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476477  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.476504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.476672  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.476877  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477093  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.477241  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.477468  300705 main.go:141] libmachine: Using SSH client type: native
	I0729 13:38:35.477642  300705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.207 22 <nil> <nil>}
	I0729 13:38:35.477652  300705 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:38:35.577853  300705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260315.546644144
	
	I0729 13:38:35.577882  300705 fix.go:216] guest clock: 1722260315.546644144
	I0729 13:38:35.577892  300705 fix.go:229] Guest: 2024-07-29 13:38:35.546644144 +0000 UTC Remote: 2024-07-29 13:38:35.473518121 +0000 UTC m=+357.868969453 (delta=73.126023ms)
	I0729 13:38:35.577919  300705 fix.go:200] guest clock delta is within tolerance: 73.126023ms
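	The clock check logged just above compares the guest VM's wall clock against the host's and accepts the ~73ms drift because it falls inside minikube's tolerance. A minimal illustrative sketch of that kind of check (hypothetical Go, not the actual fix.go implementation) is:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the absolute guest/host clock delta
	// is no larger than the allowed tolerance.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(73 * time.Millisecond) // roughly the delta reported above
		fmt.Println(withinTolerance(guest, host, 2*time.Second))
	}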
	I0729 13:38:35.577926  300705 start.go:83] releasing machines lock for "embed-certs-135920", held for 18.971820448s
	I0729 13:38:35.577950  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.578260  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:35.581109  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581474  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.581507  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.581707  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582287  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582451  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:35.582562  300705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:38:35.582616  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.582645  300705 ssh_runner.go:195] Run: cat /version.json
	I0729 13:38:35.582673  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:35.585527  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585555  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.585989  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586021  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586062  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:35.586084  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:35.586171  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586351  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:35.586360  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586573  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:35.586582  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586795  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:35.586838  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.586942  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:35.686359  300705 ssh_runner.go:195] Run: systemctl --version
	I0729 13:38:35.692726  300705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:38:35.838487  300705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:38:35.844313  300705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:38:35.844416  300705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:38:35.861079  300705 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:38:35.861103  300705 start.go:495] detecting cgroup driver to use...
	I0729 13:38:35.861178  300705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:38:35.880678  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:38:35.897996  300705 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:38:35.898070  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:38:35.915337  300705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:38:35.930990  300705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:38:36.039923  300705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:38:36.198255  300705 docker.go:233] disabling docker service ...
	I0729 13:38:36.198340  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:38:36.213373  300705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:38:36.227364  300705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:38:36.351279  300705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:38:36.468325  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:38:36.483692  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:38:36.503872  300705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:38:36.503945  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.515397  300705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:38:36.515502  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.527170  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.538668  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.550013  300705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:38:36.561402  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.573747  300705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.594158  300705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:38:36.606047  300705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:38:36.616858  300705 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:38:36.616961  300705 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:38:36.633281  300705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:38:36.644423  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:36.779934  300705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:38:36.924394  300705 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:38:36.924483  300705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:38:36.929889  300705 start.go:563] Will wait 60s for crictl version
	I0729 13:38:36.929935  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:38:36.933671  300705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:38:36.973428  300705 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:38:36.973506  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.002245  300705 ssh_runner.go:195] Run: crio --version
	I0729 13:38:37.034982  300705 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:38:37.036162  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetIP
	I0729 13:38:37.039092  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039504  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:37.039533  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:37.039697  300705 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 13:38:37.044028  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:37.057278  300705 kubeadm.go:883] updating cluster {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:38:37.057398  300705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:38:37.057504  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:37.096111  300705 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:38:37.096205  300705 ssh_runner.go:195] Run: which lz4
	I0729 13:38:37.100600  300705 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:38:37.104942  300705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:38:37.104974  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:38:35.325849  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:37.326770  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:36.362042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:36.862022  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.361208  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:37.862020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.362115  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.861360  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.362077  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:39.861478  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.361278  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:40.861920  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:38.167072  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:40.667067  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:38.548671  300705 crio.go:462] duration metric: took 1.448103052s to copy over tarball
	I0729 13:38:38.548764  300705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:38:40.801144  300705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.252337742s)
	I0729 13:38:40.801177  300705 crio.go:469] duration metric: took 2.252468783s to extract the tarball
	I0729 13:38:40.801185  300705 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:38:40.840132  300705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:38:40.887424  300705 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:38:40.887447  300705 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:38:40.887456  300705 kubeadm.go:934] updating node { 192.168.72.207 8443 v1.30.3 crio true true} ...
	I0729 13:38:40.887583  300705 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-135920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:38:40.887661  300705 ssh_runner.go:195] Run: crio config
	I0729 13:38:40.943732  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:40.943759  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:40.943771  300705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:38:40.943801  300705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.207 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-135920 NodeName:embed-certs-135920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:38:40.943967  300705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-135920"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:38:40.944048  300705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:38:40.954284  300705 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:38:40.954354  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:38:40.963877  300705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 13:38:40.981828  300705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:38:40.999273  300705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 13:38:41.016590  300705 ssh_runner.go:195] Run: grep 192.168.72.207	control-plane.minikube.internal$ /etc/hosts
	I0729 13:38:41.020149  300705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:38:41.031970  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:41.163779  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:41.181723  300705 certs.go:68] Setting up /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920 for IP: 192.168.72.207
	I0729 13:38:41.181746  300705 certs.go:194] generating shared ca certs ...
	I0729 13:38:41.181764  300705 certs.go:226] acquiring lock for ca certs: {Name:mk35442f684e402d7c9b18ad971a254e9ba86fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:41.181989  300705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key
	I0729 13:38:41.182053  300705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key
	I0729 13:38:41.182067  300705 certs.go:256] generating profile certs ...
	I0729 13:38:41.182191  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/client.key
	I0729 13:38:41.182257  300705 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key.45ab1b35
	I0729 13:38:41.182306  300705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key
	I0729 13:38:41.182454  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem (1338 bytes)
	W0729 13:38:41.182501  300705 certs.go:480] ignoring /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340_empty.pem, impossibly tiny 0 bytes
	I0729 13:38:41.182517  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:38:41.182553  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:38:41.182583  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:38:41.182607  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/certs/key.pem (1679 bytes)
	I0729 13:38:41.182647  300705 certs.go:484] found cert: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem (1708 bytes)
	I0729 13:38:41.183522  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:38:41.239170  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:38:41.278086  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:38:41.318584  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 13:38:41.351639  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 13:38:41.389242  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:38:41.414897  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:38:41.439178  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/embed-certs-135920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:38:41.464278  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:38:41.488391  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/certs/240340.pem --> /usr/share/ca-certificates/240340.pem (1338 bytes)
	I0729 13:38:41.515271  300705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/ssl/certs/2403402.pem --> /usr/share/ca-certificates/2403402.pem (1708 bytes)
	I0729 13:38:41.539904  300705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:38:41.557036  300705 ssh_runner.go:195] Run: openssl version
	I0729 13:38:41.562935  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:38:41.580782  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585603  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.585670  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:38:41.591504  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:38:41.602129  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/240340.pem && ln -fs /usr/share/ca-certificates/240340.pem /etc/ssl/certs/240340.pem"
	I0729 13:38:41.612441  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616813  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 12:16 /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.616866  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/240340.pem
	I0729 13:38:41.622328  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/240340.pem /etc/ssl/certs/51391683.0"
	I0729 13:38:41.633108  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2403402.pem && ln -fs /usr/share/ca-certificates/2403402.pem /etc/ssl/certs/2403402.pem"
	I0729 13:38:41.643897  300705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648369  300705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 12:16 /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.648415  300705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2403402.pem
	I0729 13:38:41.654085  300705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2403402.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:38:41.665037  300705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:38:41.670067  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:38:41.676340  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:38:41.682386  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:38:41.688809  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:38:41.694957  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:38:41.700469  300705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:38:41.706471  300705 kubeadm.go:392] StartCluster: {Name:embed-certs-135920 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-135920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:38:41.706561  300705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:38:41.706617  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.746623  300705 cri.go:89] found id: ""
	I0729 13:38:41.746703  300705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:38:41.757101  300705 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 13:38:41.757121  300705 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 13:38:41.757174  300705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 13:38:41.766817  300705 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:38:41.767837  300705 kubeconfig.go:125] found "embed-certs-135920" server: "https://192.168.72.207:8443"
	I0729 13:38:41.770191  300705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 13:38:41.779930  300705 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.207
	I0729 13:38:41.779961  300705 kubeadm.go:1160] stopping kube-system containers ...
	I0729 13:38:41.779976  300705 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 13:38:41.780030  300705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:38:41.816273  300705 cri.go:89] found id: ""
	I0729 13:38:41.816350  300705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 13:38:41.836512  300705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:38:41.847230  300705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:38:41.847249  300705 kubeadm.go:157] found existing configuration files:
	
	I0729 13:38:41.847297  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:38:41.856215  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:38:41.856262  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:38:41.866646  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:38:41.876656  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:38:41.876723  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:38:41.886810  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.895693  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:38:41.895755  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:38:41.904774  300705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:38:41.915232  300705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:38:41.915301  300705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:38:41.924961  300705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:38:41.937051  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:42.059359  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:39.329415  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.826891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:41.361613  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:41.861155  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.361524  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:42.862047  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.361778  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.862055  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.861737  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.361194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:45.862019  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.326814  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:45.666203  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:42.934386  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.142119  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.221754  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:43.346345  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:38:43.346451  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:43.847275  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.347551  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:44.391680  300705 api_server.go:72] duration metric: took 1.045336573s to wait for apiserver process to appear ...
	I0729 13:38:44.391709  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:38:44.391735  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:44.392354  300705 api_server.go:269] stopped: https://192.168.72.207:8443/healthz: Get "https://192.168.72.207:8443/healthz": dial tcp 192.168.72.207:8443: connect: connection refused
	I0729 13:38:44.892773  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.149059  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.149101  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.149128  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.161645  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 13:38:47.161672  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 13:38:47.391878  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.396499  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.396527  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:47.892015  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:47.897406  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 13:38:47.897436  300705 api_server.go:103] status: https://192.168.72.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 13:38:48.391867  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:38:48.395941  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:38:48.401926  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:38:48.401951  300705 api_server.go:131] duration metric: took 4.010234721s to wait for apiserver health ...
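For reference, the healthz wait logged above (api_server.go polling https://192.168.72.207:8443/healthz through 403, 500 and finally 200 "ok") follows the usual poll-until-200 pattern. Below is a minimal, self-contained Go sketch of that pattern; the interval, timeout and TLS settings are illustrative assumptions, not minikube's actual implementation.

// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200 "ok".
// Minimal sketch of the pattern visible in the log above; not minikube's own code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bring-up, so this
		// illustrative probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: "ok"
			}
			// 403 while RBAC bootstraps and 500 while poststarthooks finish are expected,
			// exactly as seen in the log above.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.207:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}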
	I0729 13:38:48.401962  300705 cni.go:84] Creating CNI manager for ""
	I0729 13:38:48.401970  300705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:38:48.403912  300705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:38:44.073092  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:46.327011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:48.405332  300705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:38:48.416550  300705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
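The step above copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown in the log. The sketch below writes a typical bridge+portmap conflist to that path purely as an illustration of the format — the JSON body is an assumption, not the file minikube actually generated.

// writecni.go: write a bridge CNI conflist like the 1-k8s.conflist copied above.
// The JSON is a generic bridge+portmap example (assumed content, subnet included),
// since the actual 496-byte file is not reproduced in the log.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// 0644 so the container runtime can read it; writing to this path needs root.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}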
	I0729 13:38:48.439881  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:38:48.452435  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:38:48.452477  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 13:38:48.452527  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 13:38:48.452544  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 13:38:48.452556  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 13:38:48.452575  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 13:38:48.452584  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 13:38:48.452594  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:38:48.452604  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 13:38:48.452617  300705 system_pods.go:74] duration metric: took 12.710662ms to wait for pod list to return data ...
	I0729 13:38:48.452629  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:38:48.455453  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:38:48.455484  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:38:48.455497  300705 node_conditions.go:105] duration metric: took 2.858433ms to run NodePressure ...
	I0729 13:38:48.455518  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 13:38:48.791507  300705 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796191  300705 kubeadm.go:739] kubelet initialised
	I0729 13:38:48.796213  300705 kubeadm.go:740] duration metric: took 4.674843ms waiting for restarted kubelet to initialise ...
	I0729 13:38:48.796222  300705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:48.802395  300705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.807224  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807247  300705 pod_ready.go:81] duration metric: took 4.825485ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.807263  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.807269  300705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.812485  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812516  300705 pod_ready.go:81] duration metric: took 5.235923ms for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.812529  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "etcd-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.812536  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.817345  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817374  300705 pod_ready.go:81] duration metric: took 4.827847ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.817383  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.817390  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:48.843709  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843754  300705 pod_ready.go:81] duration metric: took 26.35618ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:48.843775  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.843783  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.243226  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243257  300705 pod_ready.go:81] duration metric: took 399.464753ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.243269  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-proxy-sn8bc" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.243278  300705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:49.643370  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643399  300705 pod_ready.go:81] duration metric: took 400.112533ms for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:49.643410  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:49.643416  300705 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:50.044089  300705 pod_ready.go:97] node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044119  300705 pod_ready.go:81] duration metric: took 400.694081ms for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:38:50.044128  300705 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-135920" hosting pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:50.044135  300705 pod_ready.go:38] duration metric: took 1.247904039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
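The pod_ready waits above poll each system-critical pod for the Ready condition and skip pods whose node is not yet Ready. A generic client-go sketch of that condition check follows; the namespace, pod name and kubeconfig path are copied from the log, while the polling loop itself is an assumption rather than minikube's pod_ready.go.

// podready.go: check whether a pod currently reports the Ready condition,
// the condition the pod_ready.go waits above are polling for. Generic sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as written by the run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19341-233093/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the coredns pod named in the log until Ready or until the 4m budget runs out.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-rgh5d", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}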
	I0729 13:38:50.044153  300705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:38:50.055730  300705 ops.go:34] apiserver oom_adj: -16
	I0729 13:38:50.055755  300705 kubeadm.go:597] duration metric: took 8.298625813s to restartPrimaryControlPlane
	I0729 13:38:50.055765  300705 kubeadm.go:394] duration metric: took 8.349303256s to StartCluster
	I0729 13:38:50.055785  300705 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.055869  300705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:38:50.057734  300705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:38:50.058013  300705 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:38:50.058092  300705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:38:50.058165  300705 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-135920"
	I0729 13:38:50.058216  300705 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-135920"
	W0729 13:38:50.058230  300705 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:38:50.058217  300705 addons.go:69] Setting default-storageclass=true in profile "embed-certs-135920"
	I0729 13:38:50.058244  300705 addons.go:69] Setting metrics-server=true in profile "embed-certs-135920"
	I0729 13:38:50.058268  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058270  300705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-135920"
	I0729 13:38:50.058297  300705 config.go:182] Loaded profile config "embed-certs-135920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:38:50.058305  300705 addons.go:234] Setting addon metrics-server=true in "embed-certs-135920"
	W0729 13:38:50.058350  300705 addons.go:243] addon metrics-server should already be in state true
	I0729 13:38:50.058416  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.058719  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058746  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058763  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058766  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.058732  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.058835  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.061029  300705 out.go:177] * Verifying Kubernetes components...
	I0729 13:38:50.062610  300705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:38:50.074642  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0729 13:38:50.074661  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0729 13:38:50.075119  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0729 13:38:50.075217  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075310  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075570  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.075833  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.075856  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076049  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076066  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076273  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076367  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.076393  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.076434  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.076620  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.076863  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.076912  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.076959  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.077488  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.077519  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.080392  300705 addons.go:234] Setting addon default-storageclass=true in "embed-certs-135920"
	W0729 13:38:50.080419  300705 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:38:50.080458  300705 host.go:66] Checking if "embed-certs-135920" exists ...
	I0729 13:38:50.080872  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.080914  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.093352  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0729 13:38:50.093981  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.094704  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.094742  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.095201  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.095452  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.095863  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0729 13:38:50.096287  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096506  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0729 13:38:50.096945  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.096974  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.096991  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.097343  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.097408  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.097508  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.097529  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.099585  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.099600  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.099936  300705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:38:50.100730  300705 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:38:50.100765  300705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:38:50.101377  300705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.101399  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:38:50.101424  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.101563  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.103218  300705 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:38:46.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:46.862046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.362045  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:47.862042  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.361183  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:48.862026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.361204  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:49.861490  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.361635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.861519  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:50.104927  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:38:50.104948  300705 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:38:50.104971  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.105309  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106036  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.106207  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.106369  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.106615  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.106716  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.106817  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.108316  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108833  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.108859  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.108908  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.109081  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.109240  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.109354  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.119251  300705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0729 13:38:50.119703  300705 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:38:50.120206  300705 main.go:141] libmachine: Using API Version  1
	I0729 13:38:50.120235  300705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:38:50.120620  300705 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:38:50.120813  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetState
	I0729 13:38:50.122685  300705 main.go:141] libmachine: (embed-certs-135920) Calling .DriverName
	I0729 13:38:50.122898  300705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.122910  300705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:38:50.122923  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHHostname
	I0729 13:38:50.125412  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.125875  300705 main.go:141] libmachine: (embed-certs-135920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:0f:14", ip: ""} in network mk-embed-certs-135920: {Iface:virbr4 ExpiryTime:2024-07-29 14:38:28 +0000 UTC Type:0 Mac:52:54:00:36:0f:14 Iaid: IPaddr:192.168.72.207 Prefix:24 Hostname:embed-certs-135920 Clientid:01:52:54:00:36:0f:14}
	I0729 13:38:50.125914  300705 main.go:141] libmachine: (embed-certs-135920) DBG | domain embed-certs-135920 has defined IP address 192.168.72.207 and MAC address 52:54:00:36:0f:14 in network mk-embed-certs-135920
	I0729 13:38:50.126140  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHPort
	I0729 13:38:50.126321  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHKeyPath
	I0729 13:38:50.126448  300705 main.go:141] libmachine: (embed-certs-135920) Calling .GetSSHUsername
	I0729 13:38:50.126566  300705 sshutil.go:53] new ssh client: &{IP:192.168.72.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/embed-certs-135920/id_rsa Username:docker}
	I0729 13:38:50.254664  300705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:38:50.276352  300705 node_ready.go:35] waiting up to 6m0s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:50.328315  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:38:50.412968  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:38:50.459653  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:38:50.459697  300705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:38:50.513203  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:38:50.513237  300705 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:38:50.576439  300705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.576469  300705 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:38:50.611994  300705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:38:50.701214  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701242  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701569  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.701636  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701647  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701657  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.701663  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.701909  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.701936  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.701939  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:50.707113  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:50.707130  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:50.707390  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:50.707407  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:50.707407  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.625719  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212712139s)
	I0729 13:38:51.625766  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.625778  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626066  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.626109  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626117  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.626135  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.626143  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.626412  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.626430  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662030  300705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.049982518s)
	I0729 13:38:51.662094  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662110  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.662391  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.662759  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.662781  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.662798  300705 main.go:141] libmachine: Making call to close driver server
	I0729 13:38:51.662806  300705 main.go:141] libmachine: (embed-certs-135920) Calling .Close
	I0729 13:38:51.663076  300705 main.go:141] libmachine: (embed-certs-135920) DBG | Closing plugin on server side
	I0729 13:38:51.663117  300705 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:38:51.663126  300705 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:38:51.663138  300705 addons.go:475] Verifying addon metrics-server=true in "embed-certs-135920"
	I0729 13:38:51.666005  300705 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 13:38:47.666568  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.167349  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.667365  300705 addons.go:510] duration metric: took 1.609276005s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
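The addon enablement above stages manifests under /etc/kubernetes/addons and applies them with the node-local kubectl. The sketch below reproduces the metrics-server apply as a standalone Go program; the paths and kubectl binary location are copied from the logged command, but running it locally via os/exec instead of over the node's SSH session is an assumption of the sketch.

// applyaddons.go: apply the metrics-server addon manifests the way the log above does.
// Illustrative sketch; minikube runs the same command through its ssh_runner instead.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}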
	I0729 13:38:52.280219  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:48.826113  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:50.826826  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:53.327720  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:51.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:51.861510  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.362026  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.861182  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.361850  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:53.861931  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.362035  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:54.861192  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.361173  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:55.862018  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:52.665875  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.666184  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:54.779805  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:56.780550  300705 node_ready.go:53] node "embed-certs-135920" has status "Ready":"False"
	I0729 13:38:55.826349  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:58.326186  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:56.361740  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:56.862033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.362084  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.861406  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:58.861194  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.361788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:59.861962  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.362043  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:00.862000  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:38:57.166551  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:59.167246  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.666773  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:38:57.780677  300705 node_ready.go:49] node "embed-certs-135920" has status "Ready":"True"
	I0729 13:38:57.780700  300705 node_ready.go:38] duration metric: took 7.504317897s for node "embed-certs-135920" to be "Ready" ...
	I0729 13:38:57.780709  300705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:38:57.786299  300705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791107  300705 pod_ready.go:92] pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace has status "Ready":"True"
	I0729 13:38:57.791132  300705 pod_ready.go:81] duration metric: took 4.805712ms for pod "coredns-7db6d8ff4d-rgh5d" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:57.791143  300705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:38:59.806437  300705 pod_ready.go:102] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:00.296725  300705 pod_ready.go:92] pod "etcd-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.296772  300705 pod_ready.go:81] duration metric: took 2.505622037s for pod "etcd-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.296782  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302450  300705 pod_ready.go:92] pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.302471  300705 pod_ready.go:81] duration metric: took 5.680644ms for pod "kube-apiserver-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.302482  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306734  300705 pod_ready.go:92] pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.306753  300705 pod_ready.go:81] duration metric: took 4.264085ms for pod "kube-controller-manager-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.306762  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311745  300705 pod_ready.go:92] pod "kube-proxy-sn8bc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:00.311763  300705 pod_ready.go:81] duration metric: took 4.990061ms for pod "kube-proxy-sn8bc" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.311773  300705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817465  300705 pod_ready.go:92] pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace has status "Ready":"True"
	I0729 13:39:01.817489  300705 pod_ready.go:81] duration metric: took 1.50570948s for pod "kube-scheduler-embed-certs-135920" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:01.817499  300705 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	I0729 13:39:00.825911  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.325485  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:01.362213  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:01.861107  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.361767  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:02.861151  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.361607  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.862013  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.362032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:04.861858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.361611  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:05.862037  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:03.667047  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.166825  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:03.826817  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.326374  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:05.325891  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:07.326167  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:06.362002  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:06.861635  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.361659  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:07.862061  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.361999  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.862083  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.361356  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:09.861763  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.361420  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:10.861822  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:08.666165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:10.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:08.824692  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.324207  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:09.326609  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.826082  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:11.362046  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:11.861909  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.362020  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:12.861834  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.361461  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.861666  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.361997  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:14.861830  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.361141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:15.862003  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:13.167800  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.665790  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:13.325286  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:15.826111  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:14.327217  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.826625  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:16.361731  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:16.862014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.361702  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.862141  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.361808  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:18.861144  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.361104  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:19.861123  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.361276  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:20.861176  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:17.666780  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.165629  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:18.328096  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:20.824426  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:19.326628  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.825705  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:21.362052  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:21.861150  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.361802  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.861996  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.362106  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:23.861135  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.361998  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:24.862048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.361848  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:25.861813  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:22.666434  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.666549  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:22.824988  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:24.825210  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.825579  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:23.826380  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:25.826544  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:27.826988  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:26.362048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:26.861651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:26.861733  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:26.904275  301425 cri.go:89] found id: ""
	I0729 13:39:26.904307  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.904315  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:26.904322  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:26.904387  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:26.946925  301425 cri.go:89] found id: ""
	I0729 13:39:26.946954  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.946966  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:26.946973  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:26.947036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:26.979236  301425 cri.go:89] found id: ""
	I0729 13:39:26.979267  301425 logs.go:276] 0 containers: []
	W0729 13:39:26.979276  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:26.979282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:26.979330  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:27.022185  301425 cri.go:89] found id: ""
	I0729 13:39:27.022212  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.022220  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:27.022226  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:27.022277  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:27.055228  301425 cri.go:89] found id: ""
	I0729 13:39:27.055256  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.055266  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:27.055274  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:27.055335  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:27.088885  301425 cri.go:89] found id: ""
	I0729 13:39:27.088918  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.088926  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:27.088933  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:27.088986  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:27.123861  301425 cri.go:89] found id: ""
	I0729 13:39:27.123893  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.123902  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:27.123915  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:27.123967  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:27.157921  301425 cri.go:89] found id: ""
	I0729 13:39:27.157956  301425 logs.go:276] 0 containers: []
	W0729 13:39:27.157964  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:27.157988  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:27.158003  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.222447  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:27.222489  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:27.265646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:27.265680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:27.317344  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:27.317388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:27.333664  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:27.333689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:27.460502  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:29.960703  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:29.974159  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:29.974235  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:30.009701  301425 cri.go:89] found id: ""
	I0729 13:39:30.009740  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.009753  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:30.009761  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:30.009822  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:30.045806  301425 cri.go:89] found id: ""
	I0729 13:39:30.045841  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.045853  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:30.045860  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:30.045924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:30.078709  301425 cri.go:89] found id: ""
	I0729 13:39:30.078738  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.078747  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:30.078753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:30.078808  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:30.112884  301425 cri.go:89] found id: ""
	I0729 13:39:30.112920  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.112932  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:30.112943  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:30.113012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:30.148160  301425 cri.go:89] found id: ""
	I0729 13:39:30.148196  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.148208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:30.148217  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:30.148285  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:30.186939  301425 cri.go:89] found id: ""
	I0729 13:39:30.186967  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.186975  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:30.186981  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:30.187039  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:30.241888  301425 cri.go:89] found id: ""
	I0729 13:39:30.241915  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.241926  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:30.241934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:30.242009  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:30.281482  301425 cri.go:89] found id: ""
	I0729 13:39:30.281510  301425 logs.go:276] 0 containers: []
	W0729 13:39:30.281518  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:30.281527  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:30.281540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:30.321688  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:30.321730  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:30.378464  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:30.378508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:30.394109  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:30.394150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:30.474077  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:30.474101  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:30.474118  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:27.166322  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.166623  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.666142  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:29.323534  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:31.324750  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:30.327219  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:32.826011  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.046016  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:33.059705  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:33.059795  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:33.096521  301425 cri.go:89] found id: ""
	I0729 13:39:33.096549  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.096557  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:33.096564  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:33.096621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:33.131262  301425 cri.go:89] found id: ""
	I0729 13:39:33.131295  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.131307  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:33.131314  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:33.131378  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:33.168889  301425 cri.go:89] found id: ""
	I0729 13:39:33.168915  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.168925  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:33.168932  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:33.168994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:33.205513  301425 cri.go:89] found id: ""
	I0729 13:39:33.205547  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.205558  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:33.205567  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:33.205644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:33.247051  301425 cri.go:89] found id: ""
	I0729 13:39:33.247079  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.247087  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:33.247093  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:33.247149  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:33.279541  301425 cri.go:89] found id: ""
	I0729 13:39:33.279575  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.279587  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:33.279596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:33.279659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:33.314000  301425 cri.go:89] found id: ""
	I0729 13:39:33.314034  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.314046  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:33.314054  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:33.314117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:33.351363  301425 cri.go:89] found id: ""
	I0729 13:39:33.351390  301425 logs.go:276] 0 containers: []
	W0729 13:39:33.351401  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:33.351412  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:33.351437  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:33.413509  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:33.413547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:33.428128  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:33.428165  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:33.495430  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:33.495461  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:33.495478  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:33.574060  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:33.574098  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:34.166133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.167919  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:33.823668  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.824684  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:35.326216  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826516  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:36.113561  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:36.126899  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:36.126965  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:36.163363  301425 cri.go:89] found id: ""
	I0729 13:39:36.163396  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.163407  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:36.163414  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:36.163473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:36.205215  301425 cri.go:89] found id: ""
	I0729 13:39:36.205243  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.205259  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:36.205267  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:36.205331  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:36.243166  301425 cri.go:89] found id: ""
	I0729 13:39:36.243220  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.243231  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:36.243239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:36.243295  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:36.280804  301425 cri.go:89] found id: ""
	I0729 13:39:36.280836  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.280845  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:36.280852  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:36.280903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:36.317291  301425 cri.go:89] found id: ""
	I0729 13:39:36.317320  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.317330  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:36.317337  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:36.317399  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:36.358111  301425 cri.go:89] found id: ""
	I0729 13:39:36.358145  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.358156  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:36.358164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:36.358229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:36.399407  301425 cri.go:89] found id: ""
	I0729 13:39:36.399440  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.399451  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:36.399459  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:36.399525  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:36.437876  301425 cri.go:89] found id: ""
	I0729 13:39:36.437904  301425 logs.go:276] 0 containers: []
	W0729 13:39:36.437914  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:36.437926  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:36.437942  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:36.514464  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:36.514493  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:36.514511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:36.592036  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:36.592083  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:36.647650  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:36.647691  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:36.706890  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:36.706935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.226070  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:39.239313  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:39.239373  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:39.274158  301425 cri.go:89] found id: ""
	I0729 13:39:39.274191  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.274202  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:39.274210  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:39.274286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:39.308448  301425 cri.go:89] found id: ""
	I0729 13:39:39.308484  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.308492  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:39.308499  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:39.308563  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:39.347745  301425 cri.go:89] found id: ""
	I0729 13:39:39.347782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.347791  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:39.347798  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:39.347856  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:39.380649  301425 cri.go:89] found id: ""
	I0729 13:39:39.380679  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.380688  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:39.380696  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:39.380767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:39.415076  301425 cri.go:89] found id: ""
	I0729 13:39:39.415107  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.415115  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:39.415120  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:39.415170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:39.450749  301425 cri.go:89] found id: ""
	I0729 13:39:39.450782  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.450793  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:39.450801  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:39.450864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:39.482148  301425 cri.go:89] found id: ""
	I0729 13:39:39.482175  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.482184  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:39.482190  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:39.482239  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:39.518558  301425 cri.go:89] found id: ""
	I0729 13:39:39.518588  301425 logs.go:276] 0 containers: []
	W0729 13:39:39.518597  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:39.518608  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:39.518622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:39.555753  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:39.555786  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:39.606627  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:39.606661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:39.620359  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:39.620388  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:39.690685  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:39.690711  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:39.690728  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:38.665446  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.666445  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:37.826801  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:40.325166  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:39.827390  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.326038  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.271925  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:42.284365  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:42.284447  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:42.318966  301425 cri.go:89] found id: ""
	I0729 13:39:42.318998  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.319020  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:42.319028  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:42.319111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:42.354811  301425 cri.go:89] found id: ""
	I0729 13:39:42.354840  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.354854  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:42.354862  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:42.354917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:42.402524  301425 cri.go:89] found id: ""
	I0729 13:39:42.402557  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.402569  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:42.402577  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:42.402643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:42.460954  301425 cri.go:89] found id: ""
	I0729 13:39:42.460984  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.461001  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:42.461010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:42.461063  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:42.516849  301425 cri.go:89] found id: ""
	I0729 13:39:42.516880  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.516890  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:42.516898  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:42.516963  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:42.560289  301425 cri.go:89] found id: ""
	I0729 13:39:42.560316  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.560325  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:42.560332  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:42.560397  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:42.597798  301425 cri.go:89] found id: ""
	I0729 13:39:42.597829  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.597839  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:42.597847  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:42.597912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:42.633015  301425 cri.go:89] found id: ""
	I0729 13:39:42.633043  301425 logs.go:276] 0 containers: []
	W0729 13:39:42.633059  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:42.633068  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:42.633080  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:42.711103  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:42.711126  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:42.711141  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:42.787459  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:42.787499  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:42.828965  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:42.829002  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:42.881702  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:42.881740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:45.396462  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:45.410766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:45.410859  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:45.445886  301425 cri.go:89] found id: ""
	I0729 13:39:45.445931  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.445943  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:45.445960  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:45.446023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:45.484293  301425 cri.go:89] found id: ""
	I0729 13:39:45.484326  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.484338  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:45.484346  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:45.484410  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:45.520209  301425 cri.go:89] found id: ""
	I0729 13:39:45.520237  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.520246  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:45.520252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:45.520300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:45.555671  301425 cri.go:89] found id: ""
	I0729 13:39:45.555702  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.555711  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:45.555717  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:45.555767  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:45.594578  301425 cri.go:89] found id: ""
	I0729 13:39:45.594609  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.594618  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:45.594624  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:45.594685  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:45.631777  301425 cri.go:89] found id: ""
	I0729 13:39:45.631805  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.631817  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:45.631825  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:45.631881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:45.667163  301425 cri.go:89] found id: ""
	I0729 13:39:45.667189  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.667197  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:45.667203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:45.667258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:45.703393  301425 cri.go:89] found id: ""
	I0729 13:39:45.703434  301425 logs.go:276] 0 containers: []
	W0729 13:39:45.703443  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:45.703454  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:45.703488  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:45.774424  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:45.774452  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:45.774472  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:45.857529  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:45.857586  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:45.899737  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:45.899775  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:45.952640  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:45.952685  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:42.666728  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.165982  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:42.825543  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:45.323544  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:47.323595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:44.825237  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:46.825276  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.467705  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:48.482292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:48.482380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:48.520146  301425 cri.go:89] found id: ""
	I0729 13:39:48.520181  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.520195  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:48.520204  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:48.520282  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:48.552623  301425 cri.go:89] found id: ""
	I0729 13:39:48.552654  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.552665  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:48.552672  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:48.552734  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:48.587254  301425 cri.go:89] found id: ""
	I0729 13:39:48.587290  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.587303  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:48.587309  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:48.587368  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:48.621045  301425 cri.go:89] found id: ""
	I0729 13:39:48.621076  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.621088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:48.621096  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:48.621160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:48.654117  301425 cri.go:89] found id: ""
	I0729 13:39:48.654151  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.654163  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:48.654171  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:48.654236  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:48.693108  301425 cri.go:89] found id: ""
	I0729 13:39:48.693149  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.693166  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:48.693173  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:48.693225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:48.733000  301425 cri.go:89] found id: ""
	I0729 13:39:48.733025  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.733033  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:48.733039  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:48.733088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:48.773761  301425 cri.go:89] found id: ""
	I0729 13:39:48.773789  301425 logs.go:276] 0 containers: []
	W0729 13:39:48.773798  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:48.773807  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:48.773822  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:48.826655  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:48.826683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:48.840335  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:48.840364  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:48.913727  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:48.913754  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:48.913774  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:48.990196  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:48.990235  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:47.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.167105  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.667165  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:49.324027  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.324146  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:48.825859  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.326299  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:51.533333  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:51.547115  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:51.547175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:51.583247  301425 cri.go:89] found id: ""
	I0729 13:39:51.583284  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.583292  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:51.583297  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:51.583350  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:51.618925  301425 cri.go:89] found id: ""
	I0729 13:39:51.618958  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.618969  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:51.618977  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:51.619036  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:51.657099  301425 cri.go:89] found id: ""
	I0729 13:39:51.657132  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.657144  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:51.657151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:51.657210  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:51.695413  301425 cri.go:89] found id: ""
	I0729 13:39:51.695459  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.695471  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:51.695480  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:51.695553  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:51.731153  301425 cri.go:89] found id: ""
	I0729 13:39:51.731186  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.731198  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:51.731206  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:51.731271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:51.765662  301425 cri.go:89] found id: ""
	I0729 13:39:51.765716  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.765730  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:51.765740  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:51.765807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:51.800442  301425 cri.go:89] found id: ""
	I0729 13:39:51.800480  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.800491  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:51.800500  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:51.800562  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:51.844516  301425 cri.go:89] found id: ""
	I0729 13:39:51.844542  301425 logs.go:276] 0 containers: []
	W0729 13:39:51.844551  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:51.844562  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:51.844580  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:51.896139  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:51.896176  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:51.910479  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:51.910511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:51.980025  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:51.980052  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:51.980071  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:52.054674  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:52.054717  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.596468  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:54.612233  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:54.612344  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:54.653506  301425 cri.go:89] found id: ""
	I0729 13:39:54.653547  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.653558  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:54.653565  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:54.653624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:54.696964  301425 cri.go:89] found id: ""
	I0729 13:39:54.697002  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.697015  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:54.697023  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:54.697088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:54.731165  301425 cri.go:89] found id: ""
	I0729 13:39:54.731196  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.731207  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:54.731214  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:54.731279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:54.774397  301425 cri.go:89] found id: ""
	I0729 13:39:54.774426  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.774437  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:54.774444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:54.774506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:54.813365  301425 cri.go:89] found id: ""
	I0729 13:39:54.813396  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.813408  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:54.813414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:54.813480  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:54.849936  301425 cri.go:89] found id: ""
	I0729 13:39:54.849962  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.849970  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:54.849980  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:54.850042  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:54.883979  301425 cri.go:89] found id: ""
	I0729 13:39:54.884007  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.884015  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:54.884021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:54.884087  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:54.919754  301425 cri.go:89] found id: ""
	I0729 13:39:54.919779  301425 logs.go:276] 0 containers: []
	W0729 13:39:54.919787  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:54.919796  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:54.919817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:54.973082  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:54.973117  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:54.986534  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:54.986571  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:55.055473  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:55.055499  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:55.055514  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:55.138278  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:55.138322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:39:54.166585  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:56.166714  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.824525  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.824559  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:53.825238  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:55.826464  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.826664  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.683818  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:39:57.698992  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:39:57.699070  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:39:57.742071  301425 cri.go:89] found id: ""
	I0729 13:39:57.742103  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.742113  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:39:57.742121  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:39:57.742185  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:39:57.777871  301425 cri.go:89] found id: ""
	I0729 13:39:57.777902  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.777911  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:39:57.777918  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:39:57.777975  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:39:57.817767  301425 cri.go:89] found id: ""
	I0729 13:39:57.817798  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.817809  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:39:57.817817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:39:57.817889  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:39:57.855608  301425 cri.go:89] found id: ""
	I0729 13:39:57.855634  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.855644  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:39:57.855651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:39:57.855714  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:57.891219  301425 cri.go:89] found id: ""
	I0729 13:39:57.891248  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.891258  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:39:57.891266  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:39:57.891336  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:39:57.926000  301425 cri.go:89] found id: ""
	I0729 13:39:57.926034  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.926045  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:39:57.926053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:39:57.926116  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:39:57.964935  301425 cri.go:89] found id: ""
	I0729 13:39:57.964962  301425 logs.go:276] 0 containers: []
	W0729 13:39:57.964978  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:39:57.964985  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:39:57.965051  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:39:58.001363  301425 cri.go:89] found id: ""
	I0729 13:39:58.001393  301425 logs.go:276] 0 containers: []
	W0729 13:39:58.001405  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:39:58.001417  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:39:58.001434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:39:58.057551  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:39:58.057598  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:39:58.072162  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:39:58.072200  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:39:58.140533  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:39:58.140565  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:39:58.140582  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:39:58.227285  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:39:58.227330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:00.769075  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:00.783394  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:00.783471  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:00.831260  301425 cri.go:89] found id: ""
	I0729 13:40:00.831291  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.831301  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:00.831309  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:00.831370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:00.870017  301425 cri.go:89] found id: ""
	I0729 13:40:00.870045  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.870057  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:00.870065  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:00.870127  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:00.904691  301425 cri.go:89] found id: ""
	I0729 13:40:00.904728  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.904740  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:00.904748  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:00.904828  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:00.937221  301425 cri.go:89] found id: ""
	I0729 13:40:00.937249  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.937259  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:00.937265  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:00.937329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:39:58.167355  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.666837  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:39:57.824755  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.324616  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.325368  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.325689  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:02.326062  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:00.977961  301425 cri.go:89] found id: ""
	I0729 13:40:00.977991  301425 logs.go:276] 0 containers: []
	W0729 13:40:00.978002  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:00.978010  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:00.978104  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:01.014239  301425 cri.go:89] found id: ""
	I0729 13:40:01.014271  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.014283  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:01.014292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:01.014362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:01.050583  301425 cri.go:89] found id: ""
	I0729 13:40:01.050615  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.050630  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:01.050637  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:01.050696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:01.091599  301425 cri.go:89] found id: ""
	I0729 13:40:01.091624  301425 logs.go:276] 0 containers: []
	W0729 13:40:01.091634  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:01.091643  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:01.091661  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:01.146404  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:01.146445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:01.160327  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:01.160358  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:01.237120  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:01.237147  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:01.237162  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:01.321539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:01.321590  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:03.865268  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:03.879648  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:03.879724  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:03.915303  301425 cri.go:89] found id: ""
	I0729 13:40:03.915329  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.915338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:03.915344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:03.915403  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:03.951982  301425 cri.go:89] found id: ""
	I0729 13:40:03.952014  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.952023  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:03.952032  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:03.952099  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:03.989751  301425 cri.go:89] found id: ""
	I0729 13:40:03.989785  301425 logs.go:276] 0 containers: []
	W0729 13:40:03.989796  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:03.989804  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:03.989870  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:04.026934  301425 cri.go:89] found id: ""
	I0729 13:40:04.026975  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.026988  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:04.026996  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:04.027059  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:04.064135  301425 cri.go:89] found id: ""
	I0729 13:40:04.064165  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.064175  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:04.064187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:04.064256  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:04.103080  301425 cri.go:89] found id: ""
	I0729 13:40:04.103108  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.103117  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:04.103123  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:04.103172  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:04.143370  301425 cri.go:89] found id: ""
	I0729 13:40:04.143403  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.143414  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:04.143422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:04.143491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:04.179251  301425 cri.go:89] found id: ""
	I0729 13:40:04.179286  301425 logs.go:276] 0 containers: []
	W0729 13:40:04.179298  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:04.179311  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:04.179330  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:04.261058  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:04.261089  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:04.261111  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:04.342897  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:04.342935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:04.391504  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:04.391532  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:04.443064  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:04.443106  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:03.166195  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:05.166660  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.824882  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:07.324346  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:04.326236  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.825685  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:06.959346  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:06.974377  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:06.974444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:07.007797  301425 cri.go:89] found id: ""
	I0729 13:40:07.007834  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.007847  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:07.007856  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:07.007924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:07.042707  301425 cri.go:89] found id: ""
	I0729 13:40:07.042741  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.042749  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:07.042755  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:07.042807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:07.080150  301425 cri.go:89] found id: ""
	I0729 13:40:07.080185  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.080196  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:07.080203  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:07.080268  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:07.115740  301425 cri.go:89] found id: ""
	I0729 13:40:07.115777  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.115788  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:07.115796  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:07.115888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:07.154110  301425 cri.go:89] found id: ""
	I0729 13:40:07.154141  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.154151  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:07.154158  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:07.154225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:07.190819  301425 cri.go:89] found id: ""
	I0729 13:40:07.190850  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.190858  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:07.190865  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:07.190917  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:07.231530  301425 cri.go:89] found id: ""
	I0729 13:40:07.231560  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.231571  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:07.231579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:07.231643  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:07.272211  301425 cri.go:89] found id: ""
	I0729 13:40:07.272240  301425 logs.go:276] 0 containers: []
	W0729 13:40:07.272247  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:07.272257  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:07.272269  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.326673  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:07.326704  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:07.341255  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:07.341282  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:07.409850  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:07.409878  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:07.409895  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:07.493105  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:07.493169  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.033906  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:10.047938  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:10.048018  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:10.084224  301425 cri.go:89] found id: ""
	I0729 13:40:10.084251  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.084259  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:10.084265  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:10.084316  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:10.120362  301425 cri.go:89] found id: ""
	I0729 13:40:10.120398  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.120409  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:10.120417  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:10.120484  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:10.154128  301425 cri.go:89] found id: ""
	I0729 13:40:10.154160  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.154170  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:10.154178  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:10.154243  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:10.189539  301425 cri.go:89] found id: ""
	I0729 13:40:10.189574  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.189588  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:10.189596  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:10.189661  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:10.228821  301425 cri.go:89] found id: ""
	I0729 13:40:10.228855  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.228867  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:10.228875  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:10.228950  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:10.274726  301425 cri.go:89] found id: ""
	I0729 13:40:10.274758  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.274769  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:10.274776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:10.274845  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:10.308910  301425 cri.go:89] found id: ""
	I0729 13:40:10.308945  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.308956  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:10.308964  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:10.309030  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:10.346008  301425 cri.go:89] found id: ""
	I0729 13:40:10.346044  301425 logs.go:276] 0 containers: []
	W0729 13:40:10.346056  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:10.346069  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:10.346091  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:10.360541  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:10.360581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:10.433763  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:10.433788  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:10.433802  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:10.520366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:10.520418  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:10.561482  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:10.561512  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:07.668816  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:10.166833  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:09.823429  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.824033  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:08.826798  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:11.326762  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.327128  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:13.114858  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:13.128348  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:13.128425  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:13.165329  301425 cri.go:89] found id: ""
	I0729 13:40:13.165359  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.165370  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:13.165377  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:13.165441  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:13.200104  301425 cri.go:89] found id: ""
	I0729 13:40:13.200135  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.200148  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:13.200155  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:13.200224  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:13.238632  301425 cri.go:89] found id: ""
	I0729 13:40:13.238680  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.238688  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:13.238694  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:13.238748  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:13.270859  301425 cri.go:89] found id: ""
	I0729 13:40:13.270892  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.270901  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:13.270907  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:13.270976  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:13.308346  301425 cri.go:89] found id: ""
	I0729 13:40:13.308378  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.308386  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:13.308392  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:13.308444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:13.346286  301425 cri.go:89] found id: ""
	I0729 13:40:13.346319  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.346331  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:13.346339  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:13.346412  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:13.383699  301425 cri.go:89] found id: ""
	I0729 13:40:13.383736  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.383769  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:13.383791  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:13.383850  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:13.419958  301425 cri.go:89] found id: ""
	I0729 13:40:13.420045  301425 logs.go:276] 0 containers: []
	W0729 13:40:13.420058  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:13.420071  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:13.420094  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:13.473984  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:13.474028  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:13.488376  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:13.488410  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:13.559515  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:13.559543  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:13.559560  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:13.640528  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:13.640570  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:12.665799  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.666662  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.668217  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:14.323746  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.323961  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:15.826422  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.326284  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:16.189581  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:16.203962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:16.204052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:16.240537  301425 cri.go:89] found id: ""
	I0729 13:40:16.240572  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.240583  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:16.240591  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:16.240659  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:16.277060  301425 cri.go:89] found id: ""
	I0729 13:40:16.277099  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.277112  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:16.277123  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:16.277200  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:16.313839  301425 cri.go:89] found id: ""
	I0729 13:40:16.313869  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.313878  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:16.313884  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:16.313935  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:16.351806  301425 cri.go:89] found id: ""
	I0729 13:40:16.351840  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.351850  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:16.351858  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:16.351922  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:16.387122  301425 cri.go:89] found id: ""
	I0729 13:40:16.387158  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.387169  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:16.387176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:16.387242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:16.424180  301425 cri.go:89] found id: ""
	I0729 13:40:16.424209  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.424220  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:16.424229  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:16.424292  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:16.461827  301425 cri.go:89] found id: ""
	I0729 13:40:16.461865  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.461879  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:16.461889  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:16.461946  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:16.510198  301425 cri.go:89] found id: ""
	I0729 13:40:16.510230  301425 logs.go:276] 0 containers: []
	W0729 13:40:16.510238  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:16.510248  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:16.510264  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:16.585378  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:16.585420  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:16.629304  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:16.629337  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:16.682386  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:16.682434  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:16.698405  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:16.698436  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:16.770281  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.270551  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:19.284543  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:19.284617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:19.325194  301425 cri.go:89] found id: ""
	I0729 13:40:19.325221  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.325231  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:19.325238  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:19.325298  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:19.362007  301425 cri.go:89] found id: ""
	I0729 13:40:19.362038  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.362058  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:19.362066  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:19.362196  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:19.401162  301425 cri.go:89] found id: ""
	I0729 13:40:19.401191  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.401202  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:19.401210  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:19.401274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:19.434652  301425 cri.go:89] found id: ""
	I0729 13:40:19.434689  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.434700  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:19.434709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:19.434774  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:19.470116  301425 cri.go:89] found id: ""
	I0729 13:40:19.470149  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.470157  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:19.470164  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:19.470218  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:19.503593  301425 cri.go:89] found id: ""
	I0729 13:40:19.503621  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.503629  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:19.503635  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:19.503696  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:19.546127  301425 cri.go:89] found id: ""
	I0729 13:40:19.546155  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.546164  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:19.546169  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:19.546217  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:19.584600  301425 cri.go:89] found id: ""
	I0729 13:40:19.584639  301425 logs.go:276] 0 containers: []
	W0729 13:40:19.584650  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:19.584663  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:19.584681  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:19.599411  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:19.599446  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:19.665811  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:19.665836  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:19.665853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:19.747295  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:19.747339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:19.790476  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:19.790516  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:18.669004  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.166437  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:18.824788  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:21.327093  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:20.825470  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.827651  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:22.346725  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:22.361349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:22.361443  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:22.394840  301425 cri.go:89] found id: ""
	I0729 13:40:22.394870  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.394881  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:22.394889  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:22.394956  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:22.429328  301425 cri.go:89] found id: ""
	I0729 13:40:22.429356  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.429364  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:22.429370  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:22.429431  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:22.463179  301425 cri.go:89] found id: ""
	I0729 13:40:22.463206  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.463214  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:22.463220  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:22.463291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:22.497527  301425 cri.go:89] found id: ""
	I0729 13:40:22.497557  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.497565  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:22.497571  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:22.497627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:22.537607  301425 cri.go:89] found id: ""
	I0729 13:40:22.537635  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.537646  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:22.537654  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:22.537718  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:22.580658  301425 cri.go:89] found id: ""
	I0729 13:40:22.580689  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.580701  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:22.580709  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:22.580775  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:22.622229  301425 cri.go:89] found id: ""
	I0729 13:40:22.622261  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.622270  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:22.622282  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:22.622346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:22.660091  301425 cri.go:89] found id: ""
	I0729 13:40:22.660120  301425 logs.go:276] 0 containers: []
	W0729 13:40:22.660129  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:22.660139  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:22.660153  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:22.715053  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:22.715090  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:22.728865  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:22.728898  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:22.805760  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:22.805785  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:22.805799  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:22.890915  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:22.890960  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:25.457272  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:25.471002  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:25.471088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:25.506190  301425 cri.go:89] found id: ""
	I0729 13:40:25.506226  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.506237  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:25.506244  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:25.506297  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:25.540957  301425 cri.go:89] found id: ""
	I0729 13:40:25.540991  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.541002  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:25.541011  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:25.541074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:25.578378  301425 cri.go:89] found id: ""
	I0729 13:40:25.578424  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.578440  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:25.578448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:25.578518  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:25.620930  301425 cri.go:89] found id: ""
	I0729 13:40:25.620962  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.620979  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:25.620987  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:25.621056  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:25.655558  301425 cri.go:89] found id: ""
	I0729 13:40:25.655589  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.655597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:25.655604  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:25.655670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:25.688810  301425 cri.go:89] found id: ""
	I0729 13:40:25.688845  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.688855  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:25.688863  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:25.688930  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:25.724384  301425 cri.go:89] found id: ""
	I0729 13:40:25.724416  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.724428  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:25.724435  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:25.724514  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:25.763174  301425 cri.go:89] found id: ""
	I0729 13:40:25.763200  301425 logs.go:276] 0 containers: []
	W0729 13:40:25.763209  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:25.763219  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:25.763232  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:25.818517  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:25.818569  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:25.833939  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:25.833973  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:25.910487  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:25.910515  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:25.910537  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:23.167028  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.666513  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:23.824183  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.827054  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.325894  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:27.824855  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:25.993887  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:25.993929  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:28.536843  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:28.550097  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:28.550175  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:28.592664  301425 cri.go:89] found id: ""
	I0729 13:40:28.592697  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.592709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:28.592716  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:28.592788  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:28.638299  301425 cri.go:89] found id: ""
	I0729 13:40:28.638329  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.638337  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:28.638343  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:28.638395  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:28.682410  301425 cri.go:89] found id: ""
	I0729 13:40:28.682437  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.682446  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:28.682452  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:28.682511  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:28.719402  301425 cri.go:89] found id: ""
	I0729 13:40:28.719430  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.719438  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:28.719444  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:28.719504  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:28.767515  301425 cri.go:89] found id: ""
	I0729 13:40:28.767547  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.767559  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:28.767568  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:28.767633  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:28.811600  301425 cri.go:89] found id: ""
	I0729 13:40:28.811632  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.811644  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:28.811652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:28.811727  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:28.853364  301425 cri.go:89] found id: ""
	I0729 13:40:28.853397  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.853407  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:28.853414  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:28.853486  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:28.890981  301425 cri.go:89] found id: ""
	I0729 13:40:28.891013  301425 logs.go:276] 0 containers: []
	W0729 13:40:28.891024  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:28.891035  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:28.891050  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:28.944174  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:28.944213  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:28.957724  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:28.957755  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:29.026457  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:29.026479  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:29.026497  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:29.105366  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:29.105415  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:27.667251  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.166789  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:28.323476  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:30.324242  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:32.325477  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:29.825621  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.828363  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:31.649374  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:31.663432  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:31.663512  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:31.702047  301425 cri.go:89] found id: ""
	I0729 13:40:31.702080  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.702088  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:31.702098  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:31.702162  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:31.738484  301425 cri.go:89] found id: ""
	I0729 13:40:31.738510  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.738518  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:31.738524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:31.738583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:31.774214  301425 cri.go:89] found id: ""
	I0729 13:40:31.774249  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.774261  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:31.774270  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:31.774339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:31.810263  301425 cri.go:89] found id: ""
	I0729 13:40:31.810293  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.810302  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:31.810307  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:31.810369  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:31.848124  301425 cri.go:89] found id: ""
	I0729 13:40:31.848153  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.848160  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:31.848167  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:31.848234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:31.885531  301425 cri.go:89] found id: ""
	I0729 13:40:31.885561  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.885571  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:31.885580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:31.885650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:31.923904  301425 cri.go:89] found id: ""
	I0729 13:40:31.923939  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.923952  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:31.923959  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:31.924029  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:31.957165  301425 cri.go:89] found id: ""
	I0729 13:40:31.957202  301425 logs.go:276] 0 containers: []
	W0729 13:40:31.957213  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:31.957228  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:31.957248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:32.039221  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:32.039262  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.078191  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:32.078229  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:32.131871  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:32.131922  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:32.146676  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:32.146706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:32.223849  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:34.724927  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:34.739029  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:34.739113  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:34.774627  301425 cri.go:89] found id: ""
	I0729 13:40:34.774660  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.774669  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:34.774675  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:34.774743  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:34.809840  301425 cri.go:89] found id: ""
	I0729 13:40:34.809872  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.809882  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:34.809887  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:34.809940  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:34.847530  301425 cri.go:89] found id: ""
	I0729 13:40:34.847561  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.847572  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:34.847580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:34.847648  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:34.881828  301425 cri.go:89] found id: ""
	I0729 13:40:34.881856  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.881870  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:34.881876  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:34.881937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:34.918903  301425 cri.go:89] found id: ""
	I0729 13:40:34.918937  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.918949  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:34.918956  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:34.919015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:34.954714  301425 cri.go:89] found id: ""
	I0729 13:40:34.954749  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.954761  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:34.954770  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:34.954825  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:34.993433  301425 cri.go:89] found id: ""
	I0729 13:40:34.993463  301425 logs.go:276] 0 containers: []
	W0729 13:40:34.993472  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:34.993478  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:34.993531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:35.033830  301425 cri.go:89] found id: ""
	I0729 13:40:35.033859  301425 logs.go:276] 0 containers: []
	W0729 13:40:35.033874  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:35.033884  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:35.033900  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:35.084546  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:35.084595  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:35.098807  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:35.098845  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:35.182636  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:35.182662  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:35.182674  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:35.262767  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:35.262808  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:32.665817  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.670805  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.823905  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.824232  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:34.326644  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:36.825977  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:37.802033  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:37.815633  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:37.815697  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:37.857522  301425 cri.go:89] found id: ""
	I0729 13:40:37.857552  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.857563  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:37.857571  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:37.857627  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:37.897527  301425 cri.go:89] found id: ""
	I0729 13:40:37.897564  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.897575  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:37.897583  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:37.897649  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.937135  301425 cri.go:89] found id: ""
	I0729 13:40:37.937167  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.937176  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:37.937189  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:37.937255  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:37.972699  301425 cri.go:89] found id: ""
	I0729 13:40:37.972734  301425 logs.go:276] 0 containers: []
	W0729 13:40:37.972751  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:37.972761  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:37.972933  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:38.012702  301425 cri.go:89] found id: ""
	I0729 13:40:38.012732  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.012740  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:38.012747  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:38.012832  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:38.050228  301425 cri.go:89] found id: ""
	I0729 13:40:38.050260  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.050268  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:38.050275  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:38.050329  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:38.084665  301425 cri.go:89] found id: ""
	I0729 13:40:38.084693  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.084707  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:38.084715  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:38.084780  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:38.119155  301425 cri.go:89] found id: ""
	I0729 13:40:38.119200  301425 logs.go:276] 0 containers: []
	W0729 13:40:38.119211  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:38.119222  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:38.119236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:38.170934  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:38.170968  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:38.185298  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:38.185329  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:38.256118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:38.256149  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:38.256166  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:38.337090  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:38.337127  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:40.876177  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:40.889580  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:40.889655  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:40.922971  301425 cri.go:89] found id: ""
	I0729 13:40:40.923002  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.923010  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:40.923016  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:40.923074  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:40.955840  301425 cri.go:89] found id: ""
	I0729 13:40:40.955872  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.955884  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:40.955891  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:40.955952  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:37.165718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.166160  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.168344  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:38.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.324607  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:39.324996  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:41.344232  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:40.993258  301425 cri.go:89] found id: ""
	I0729 13:40:40.993290  301425 logs.go:276] 0 containers: []
	W0729 13:40:40.993298  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:40.993305  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:40.993357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:41.026370  301425 cri.go:89] found id: ""
	I0729 13:40:41.026398  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.026409  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:41.026416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:41.026473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:41.060538  301425 cri.go:89] found id: ""
	I0729 13:40:41.060565  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.060574  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:41.060579  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:41.060630  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:41.105074  301425 cri.go:89] found id: ""
	I0729 13:40:41.105108  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.105118  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:41.105126  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:41.105193  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:41.138254  301425 cri.go:89] found id: ""
	I0729 13:40:41.138280  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.138288  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:41.138294  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:41.138342  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:41.171432  301425 cri.go:89] found id: ""
	I0729 13:40:41.171458  301425 logs.go:276] 0 containers: []
	W0729 13:40:41.171466  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:41.171475  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:41.171487  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:41.184703  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:41.184736  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:41.265356  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:41.265392  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:41.265409  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:41.345939  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:41.345979  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:41.388819  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:41.388852  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:43.940388  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:43.955448  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:43.955515  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:43.998457  301425 cri.go:89] found id: ""
	I0729 13:40:43.998494  301425 logs.go:276] 0 containers: []
	W0729 13:40:43.998506  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:43.998515  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:43.998584  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:44.038142  301425 cri.go:89] found id: ""
	I0729 13:40:44.038173  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.038185  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:44.038193  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:44.038260  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:44.077270  301425 cri.go:89] found id: ""
	I0729 13:40:44.077302  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.077313  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:44.077321  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:44.077391  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:44.117612  301425 cri.go:89] found id: ""
	I0729 13:40:44.117641  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.117661  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:44.117681  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:44.117749  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:44.152564  301425 cri.go:89] found id: ""
	I0729 13:40:44.152603  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.152615  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:44.152623  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:44.152683  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:44.188245  301425 cri.go:89] found id: ""
	I0729 13:40:44.188276  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.188288  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:44.188296  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:44.188355  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:44.224947  301425 cri.go:89] found id: ""
	I0729 13:40:44.224975  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.224983  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:44.224989  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:44.225037  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:44.264830  301425 cri.go:89] found id: ""
	I0729 13:40:44.264860  301425 logs.go:276] 0 containers: []
	W0729 13:40:44.264867  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:44.264877  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:44.264893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:44.343145  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:44.343182  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:44.384619  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:44.384650  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:44.438195  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:44.438237  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:44.452115  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:44.452152  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:44.526586  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:43.666987  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.167143  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.825141  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.324972  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:43.827065  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:46.325488  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:47.027726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:47.041174  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:47.041242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:47.079265  301425 cri.go:89] found id: ""
	I0729 13:40:47.079295  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.079304  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:47.079313  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:47.079380  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:47.119775  301425 cri.go:89] found id: ""
	I0729 13:40:47.119807  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.119820  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:47.119828  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:47.119904  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:47.155381  301425 cri.go:89] found id: ""
	I0729 13:40:47.155415  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.155426  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:47.155434  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:47.155490  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:47.195071  301425 cri.go:89] found id: ""
	I0729 13:40:47.195103  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.195111  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:47.195117  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:47.195167  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:47.229487  301425 cri.go:89] found id: ""
	I0729 13:40:47.229519  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.229531  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:47.229539  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:47.229611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:47.266159  301425 cri.go:89] found id: ""
	I0729 13:40:47.266190  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.266201  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:47.266209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:47.266269  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:47.300813  301425 cri.go:89] found id: ""
	I0729 13:40:47.300845  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.300854  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:47.300860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:47.300916  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:47.340378  301425 cri.go:89] found id: ""
	I0729 13:40:47.340412  301425 logs.go:276] 0 containers: []
	W0729 13:40:47.340432  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:47.340444  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:47.340464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:47.395403  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:47.395444  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:47.409505  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:47.409539  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:47.481327  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:47.481349  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:47.481365  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:47.560129  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:47.560172  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.105832  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:50.121192  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:50.121264  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:50.160217  301425 cri.go:89] found id: ""
	I0729 13:40:50.160247  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.160256  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:50.160262  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:50.160313  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:50.199952  301425 cri.go:89] found id: ""
	I0729 13:40:50.199986  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.199998  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:50.200005  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:50.200065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:50.240036  301425 cri.go:89] found id: ""
	I0729 13:40:50.240069  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.240076  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:50.240083  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:50.240134  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:50.279761  301425 cri.go:89] found id: ""
	I0729 13:40:50.279788  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.279796  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:50.279802  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:50.279852  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:50.320324  301425 cri.go:89] found id: ""
	I0729 13:40:50.320350  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.320358  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:50.320364  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:50.320423  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:50.356385  301425 cri.go:89] found id: ""
	I0729 13:40:50.356413  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.356421  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:50.356427  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:50.356482  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:50.396866  301425 cri.go:89] found id: ""
	I0729 13:40:50.396900  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.396912  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:50.396919  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:50.397008  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:50.434778  301425 cri.go:89] found id: ""
	I0729 13:40:50.434812  301425 logs.go:276] 0 containers: []
	W0729 13:40:50.434823  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:50.434836  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:50.434853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:50.447746  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:50.447776  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:50.523750  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:50.523772  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:50.523787  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:50.604206  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:50.604255  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:50.647414  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:50.647449  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:48.666463  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.666670  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.823595  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:50.824045  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:48.826836  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:51.326943  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.327715  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.201653  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:53.215745  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:53.215814  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:53.250482  301425 cri.go:89] found id: ""
	I0729 13:40:53.250508  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.250516  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:53.250522  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:53.250583  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:53.285956  301425 cri.go:89] found id: ""
	I0729 13:40:53.285988  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.285996  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:53.286002  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:53.286055  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:53.320248  301425 cri.go:89] found id: ""
	I0729 13:40:53.320281  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.320292  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:53.320300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:53.320364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:53.355155  301425 cri.go:89] found id: ""
	I0729 13:40:53.355188  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.355200  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:53.355209  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:53.355271  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:53.389519  301425 cri.go:89] found id: ""
	I0729 13:40:53.389549  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.389557  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:53.389564  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:53.389620  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:53.424391  301425 cri.go:89] found id: ""
	I0729 13:40:53.424419  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.424427  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:53.424433  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:53.424492  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:53.463297  301425 cri.go:89] found id: ""
	I0729 13:40:53.463331  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.463342  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:53.463350  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:53.463433  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:53.497565  301425 cri.go:89] found id: ""
	I0729 13:40:53.497593  301425 logs.go:276] 0 containers: []
	W0729 13:40:53.497601  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:53.497610  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:53.497622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:53.548906  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:53.548948  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:53.562789  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:53.562823  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:53.635656  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:53.635679  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:53.635693  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:53.715973  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:53.716024  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:53.166007  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.166420  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:53.324486  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.824480  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:55.825127  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.326505  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:56.258726  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:56.273826  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:56.273905  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:56.310881  301425 cri.go:89] found id: ""
	I0729 13:40:56.310927  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.310936  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:56.310944  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:56.310999  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:56.350104  301425 cri.go:89] found id: ""
	I0729 13:40:56.350139  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.350151  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:56.350158  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:56.350221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:56.385100  301425 cri.go:89] found id: ""
	I0729 13:40:56.385136  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.385145  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:56.385151  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:56.385234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:56.421904  301425 cri.go:89] found id: ""
	I0729 13:40:56.421941  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.421953  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:56.421961  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:56.422025  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:56.457366  301425 cri.go:89] found id: ""
	I0729 13:40:56.457403  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.457414  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:56.457422  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:56.457491  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:56.496700  301425 cri.go:89] found id: ""
	I0729 13:40:56.496732  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.496746  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:56.496755  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:56.496844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:56.532011  301425 cri.go:89] found id: ""
	I0729 13:40:56.532039  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.532047  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:56.532053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:56.532102  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:56.567511  301425 cri.go:89] found id: ""
	I0729 13:40:56.567543  301425 logs.go:276] 0 containers: []
	W0729 13:40:56.567554  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:56.567566  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:56.567581  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:56.615875  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:56.615914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:56.629818  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:56.629862  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:56.703255  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:56.703284  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:56.703298  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:56.786466  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:56.786508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:59.328670  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:40:59.342993  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:40:59.343061  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:40:59.378267  301425 cri.go:89] found id: ""
	I0729 13:40:59.378301  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.378313  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:40:59.378321  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:40:59.378392  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:40:59.415637  301425 cri.go:89] found id: ""
	I0729 13:40:59.415669  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.415680  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:40:59.415687  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:40:59.415759  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:40:59.451170  301425 cri.go:89] found id: ""
	I0729 13:40:59.451204  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.451212  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:40:59.451219  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:40:59.451275  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:40:59.485914  301425 cri.go:89] found id: ""
	I0729 13:40:59.485948  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.485960  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:40:59.485975  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:40:59.486052  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:40:59.523168  301425 cri.go:89] found id: ""
	I0729 13:40:59.523198  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.523208  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:40:59.523216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:40:59.523274  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:40:59.557711  301425 cri.go:89] found id: ""
	I0729 13:40:59.557746  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.557758  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:40:59.557766  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:40:59.557826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:40:59.593387  301425 cri.go:89] found id: ""
	I0729 13:40:59.593421  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.593434  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:40:59.593442  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:40:59.593506  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:40:59.627521  301425 cri.go:89] found id: ""
	I0729 13:40:59.627555  301425 logs.go:276] 0 containers: []
	W0729 13:40:59.627566  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:40:59.627578  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:40:59.627597  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:40:59.677497  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:40:59.677538  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:40:59.692116  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:40:59.692150  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:40:59.759344  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:40:59.759369  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:40:59.759382  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:40:59.840380  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:40:59.840423  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:40:57.166964  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:59.666395  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:01.667229  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:40:58.323708  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.323995  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.325049  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:00.328293  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.826414  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:02.380718  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:02.394436  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:02.394497  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:02.433283  301425 cri.go:89] found id: ""
	I0729 13:41:02.433313  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.433323  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:02.433332  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:02.433393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:02.467206  301425 cri.go:89] found id: ""
	I0729 13:41:02.467232  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.467241  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:02.467247  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:02.467300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:02.502743  301425 cri.go:89] found id: ""
	I0729 13:41:02.502774  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.502783  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:02.502790  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:02.502844  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:02.536415  301425 cri.go:89] found id: ""
	I0729 13:41:02.536449  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.536462  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:02.536470  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:02.536527  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:02.570572  301425 cri.go:89] found id: ""
	I0729 13:41:02.570610  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.570621  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:02.570629  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:02.570702  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:02.606251  301425 cri.go:89] found id: ""
	I0729 13:41:02.606277  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.606285  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:02.606292  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:02.606345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:02.644637  301425 cri.go:89] found id: ""
	I0729 13:41:02.644664  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.644675  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:02.644683  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:02.644750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:02.679493  301425 cri.go:89] found id: ""
	I0729 13:41:02.679519  301425 logs.go:276] 0 containers: []
	W0729 13:41:02.679527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:02.679537  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:02.679553  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:02.734865  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:02.734896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:02.787929  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:02.787962  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:02.801317  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:02.801344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:02.867838  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:02.867862  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:02.867877  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:05.451323  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:05.465262  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:05.465338  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:05.499797  301425 cri.go:89] found id: ""
	I0729 13:41:05.499827  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.499837  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:05.499845  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:05.499912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:05.534363  301425 cri.go:89] found id: ""
	I0729 13:41:05.534403  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.534416  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:05.534424  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:05.534483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:05.571366  301425 cri.go:89] found id: ""
	I0729 13:41:05.571397  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.571408  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:05.571416  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:05.571481  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:05.611301  301425 cri.go:89] found id: ""
	I0729 13:41:05.611335  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.611346  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:05.611355  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:05.611422  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:05.650698  301425 cri.go:89] found id: ""
	I0729 13:41:05.650738  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.650750  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:05.650758  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:05.650823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:05.686166  301425 cri.go:89] found id: ""
	I0729 13:41:05.686204  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.686216  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:05.686225  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:05.686279  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:05.724567  301425 cri.go:89] found id: ""
	I0729 13:41:05.724604  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.724616  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:05.724628  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:05.724691  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:05.760401  301425 cri.go:89] found id: ""
	I0729 13:41:05.760430  301425 logs.go:276] 0 containers: []
	W0729 13:41:05.760438  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:05.760448  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:05.760464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:05.811654  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:05.811698  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:05.827189  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:05.827226  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:05.899612  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:05.899636  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:05.899654  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:04.168533  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.665694  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:04.325443  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:06.824244  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.325499  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:07.326413  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:05.982384  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:05.982425  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.527609  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:08.542024  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:08.542086  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:08.576313  301425 cri.go:89] found id: ""
	I0729 13:41:08.576340  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.576348  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:08.576354  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:08.576406  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:08.609996  301425 cri.go:89] found id: ""
	I0729 13:41:08.610027  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.610038  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:08.610045  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:08.610111  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:08.643722  301425 cri.go:89] found id: ""
	I0729 13:41:08.643750  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.643758  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:08.643765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:08.643815  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:08.679331  301425 cri.go:89] found id: ""
	I0729 13:41:08.679367  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.679378  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:08.679388  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:08.679459  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:08.718348  301425 cri.go:89] found id: ""
	I0729 13:41:08.718376  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.718384  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:08.718390  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:08.718444  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:08.758086  301425 cri.go:89] found id: ""
	I0729 13:41:08.758128  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.758140  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:08.758150  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:08.758225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:08.794304  301425 cri.go:89] found id: ""
	I0729 13:41:08.794333  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.794345  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:08.794354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:08.794415  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:08.835448  301425 cri.go:89] found id: ""
	I0729 13:41:08.835477  301425 logs.go:276] 0 containers: []
	W0729 13:41:08.835486  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:08.835495  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:08.835508  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:08.923886  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:08.923931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:08.963921  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:08.963957  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:09.013852  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:09.013893  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:09.027838  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:09.027872  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:09.097864  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:08.669271  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.165979  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:08.824724  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:10.825582  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:09.327071  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.826906  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:11.598762  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:11.612789  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:11.612903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:11.650029  301425 cri.go:89] found id: ""
	I0729 13:41:11.650063  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.650074  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:11.650084  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:11.650152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:11.687479  301425 cri.go:89] found id: ""
	I0729 13:41:11.687510  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.687520  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:11.687527  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:11.687593  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:11.723788  301425 cri.go:89] found id: ""
	I0729 13:41:11.723816  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.723824  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:11.723830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:11.723878  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:11.760304  301425 cri.go:89] found id: ""
	I0729 13:41:11.760341  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.760353  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:11.760361  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:11.760429  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:11.794175  301425 cri.go:89] found id: ""
	I0729 13:41:11.794202  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.794210  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:11.794216  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:11.794276  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:11.830653  301425 cri.go:89] found id: ""
	I0729 13:41:11.830679  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.830689  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:11.830697  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:11.830755  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:11.869360  301425 cri.go:89] found id: ""
	I0729 13:41:11.869391  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.869403  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:11.869410  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:11.869473  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:11.904164  301425 cri.go:89] found id: ""
	I0729 13:41:11.904195  301425 logs.go:276] 0 containers: []
	W0729 13:41:11.904206  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:11.904218  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:11.904236  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:11.979031  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:11.979054  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:11.979069  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:12.064215  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:12.064254  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:12.101854  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:12.101896  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:12.152327  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:12.152362  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:14.668032  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:14.683118  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:14.683182  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:14.722574  301425 cri.go:89] found id: ""
	I0729 13:41:14.722602  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.722612  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:14.722619  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:14.722686  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:14.759047  301425 cri.go:89] found id: ""
	I0729 13:41:14.759084  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.759094  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:14.759099  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:14.759156  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:14.794363  301425 cri.go:89] found id: ""
	I0729 13:41:14.794400  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.794411  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:14.794418  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:14.794488  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:14.831542  301425 cri.go:89] found id: ""
	I0729 13:41:14.831579  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.831586  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:14.831592  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:14.831650  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:14.878710  301425 cri.go:89] found id: ""
	I0729 13:41:14.878745  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.878758  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:14.878765  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:14.878824  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:14.937804  301425 cri.go:89] found id: ""
	I0729 13:41:14.937837  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.937847  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:14.937856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:14.937923  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:14.985616  301425 cri.go:89] found id: ""
	I0729 13:41:14.985649  301425 logs.go:276] 0 containers: []
	W0729 13:41:14.985658  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:14.985665  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:14.985737  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:15.023210  301425 cri.go:89] found id: ""
	I0729 13:41:15.023248  301425 logs.go:276] 0 containers: []
	W0729 13:41:15.023261  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:15.023273  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:15.023288  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:15.072549  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:15.072587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:15.086624  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:15.086653  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:15.155391  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:15.155412  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:15.155426  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:15.237480  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:15.237535  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:13.666473  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.666831  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:13.324177  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:15.324419  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:14.326023  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:16.826314  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.779568  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:17.794163  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:17.794225  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:17.831416  301425 cri.go:89] found id: ""
	I0729 13:41:17.831446  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.831456  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:17.831463  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:17.831519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:17.868713  301425 cri.go:89] found id: ""
	I0729 13:41:17.868740  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.868752  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:17.868758  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:17.868834  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:17.913159  301425 cri.go:89] found id: ""
	I0729 13:41:17.913200  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.913211  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:17.913221  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:17.913291  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:17.947528  301425 cri.go:89] found id: ""
	I0729 13:41:17.947559  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.947567  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:17.947573  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:17.947693  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:17.982280  301425 cri.go:89] found id: ""
	I0729 13:41:17.982314  301425 logs.go:276] 0 containers: []
	W0729 13:41:17.982323  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:17.982330  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:17.982407  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:18.023729  301425 cri.go:89] found id: ""
	I0729 13:41:18.023767  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.023776  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:18.023783  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:18.023847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:18.061594  301425 cri.go:89] found id: ""
	I0729 13:41:18.061629  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.061637  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:18.061642  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:18.061694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:18.095705  301425 cri.go:89] found id: ""
	I0729 13:41:18.095735  301425 logs.go:276] 0 containers: []
	W0729 13:41:18.095745  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:18.095758  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:18.095778  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:18.175843  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:18.175879  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:18.222979  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:18.223015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:18.277265  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:18.277308  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:18.291002  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:18.291037  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:18.373425  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:20.873958  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:20.888091  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:20.888153  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:20.925850  301425 cri.go:89] found id: ""
	I0729 13:41:20.925886  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.925894  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:20.925901  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:20.925955  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:20.962725  301425 cri.go:89] found id: ""
	I0729 13:41:20.962762  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.962774  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:20.962782  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:20.962847  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:18.166668  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.166993  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:17.827065  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.325697  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:19.325369  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:21.326574  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:20.998741  301425 cri.go:89] found id: ""
	I0729 13:41:20.998778  301425 logs.go:276] 0 containers: []
	W0729 13:41:20.998787  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:20.998794  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:20.998842  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:21.036370  301425 cri.go:89] found id: ""
	I0729 13:41:21.036401  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.036410  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:21.036417  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:21.036483  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:21.071560  301425 cri.go:89] found id: ""
	I0729 13:41:21.071588  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.071597  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:21.071605  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:21.071670  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:21.106778  301425 cri.go:89] found id: ""
	I0729 13:41:21.106810  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.106822  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:21.106830  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:21.106890  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:21.139901  301425 cri.go:89] found id: ""
	I0729 13:41:21.139926  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.139934  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:21.139940  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:21.140001  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:21.173281  301425 cri.go:89] found id: ""
	I0729 13:41:21.173312  301425 logs.go:276] 0 containers: []
	W0729 13:41:21.173320  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:21.173330  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:21.173344  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:21.225055  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:21.225095  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:21.239780  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:21.239864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:21.313460  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:21.313486  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:21.313504  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:21.398557  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:21.398599  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:23.937873  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:23.951595  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:23.951653  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:23.987177  301425 cri.go:89] found id: ""
	I0729 13:41:23.987208  301425 logs.go:276] 0 containers: []
	W0729 13:41:23.987217  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:23.987225  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:23.987324  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:24.030197  301425 cri.go:89] found id: ""
	I0729 13:41:24.030251  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.030264  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:24.030272  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:24.030339  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:24.068031  301425 cri.go:89] found id: ""
	I0729 13:41:24.068061  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.068074  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:24.068081  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:24.068154  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:24.107192  301425 cri.go:89] found id: ""
	I0729 13:41:24.107221  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.107232  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:24.107239  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:24.107304  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:24.143154  301425 cri.go:89] found id: ""
	I0729 13:41:24.143182  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.143190  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:24.143196  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:24.143248  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:24.181268  301425 cri.go:89] found id: ""
	I0729 13:41:24.181296  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.181304  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:24.181311  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:24.181370  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:24.215248  301425 cri.go:89] found id: ""
	I0729 13:41:24.215284  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.215293  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:24.215299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:24.215363  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:24.250796  301425 cri.go:89] found id: ""
	I0729 13:41:24.250822  301425 logs.go:276] 0 containers: []
	W0729 13:41:24.250831  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:24.250841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:24.250853  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:24.305841  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:24.305883  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:24.320182  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:24.320214  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:24.389667  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:24.389690  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:24.389707  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:24.471435  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:24.471479  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:22.665718  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.166432  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:22.824348  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:24.826598  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:26.828504  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:23.825754  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:25.834253  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:28.329733  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:27.014508  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:27.029318  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:27.029382  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:27.064115  301425 cri.go:89] found id: ""
	I0729 13:41:27.064150  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.064161  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:27.064169  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:27.064250  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:27.099081  301425 cri.go:89] found id: ""
	I0729 13:41:27.099110  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.099123  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:27.099131  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:27.099197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:27.132475  301425 cri.go:89] found id: ""
	I0729 13:41:27.132506  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.132518  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:27.132527  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:27.132595  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:27.168924  301425 cri.go:89] found id: ""
	I0729 13:41:27.168948  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.168956  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:27.168962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:27.169015  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:27.204052  301425 cri.go:89] found id: ""
	I0729 13:41:27.204082  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.204094  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:27.204109  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:27.204170  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:27.238355  301425 cri.go:89] found id: ""
	I0729 13:41:27.238383  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.238391  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:27.238397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:27.238496  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:27.276104  301425 cri.go:89] found id: ""
	I0729 13:41:27.276139  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.276150  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:27.276157  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:27.276222  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:27.308612  301425 cri.go:89] found id: ""
	I0729 13:41:27.308643  301425 logs.go:276] 0 containers: []
	W0729 13:41:27.308654  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:27.308667  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:27.308683  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:27.362472  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:27.362511  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:27.376349  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:27.376383  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:27.458450  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:27.458472  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:27.458486  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:27.536405  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:27.536445  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:30.076285  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:30.091308  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:30.091386  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:30.138335  301425 cri.go:89] found id: ""
	I0729 13:41:30.138369  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.138381  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:30.138389  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:30.138454  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:30.176395  301425 cri.go:89] found id: ""
	I0729 13:41:30.176425  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.176435  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:30.176443  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:30.176495  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:30.214990  301425 cri.go:89] found id: ""
	I0729 13:41:30.215027  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.215035  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:30.215041  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:30.215090  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:30.252051  301425 cri.go:89] found id: ""
	I0729 13:41:30.252080  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.252088  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:30.252094  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:30.252155  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:30.287210  301425 cri.go:89] found id: ""
	I0729 13:41:30.287240  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.287249  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:30.287254  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:30.287337  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:30.322813  301425 cri.go:89] found id: ""
	I0729 13:41:30.322842  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.322851  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:30.322857  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:30.322924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:30.358697  301425 cri.go:89] found id: ""
	I0729 13:41:30.358730  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.358738  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:30.358744  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:30.358804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:30.394252  301425 cri.go:89] found id: ""
	I0729 13:41:30.394283  301425 logs.go:276] 0 containers: []
	W0729 13:41:30.394294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:30.394305  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:30.394321  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:30.446777  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:30.446820  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:30.461564  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:30.461605  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:30.537918  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:30.537942  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:30.537958  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:30.613821  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:30.613865  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:27.167654  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.666133  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:29.323396  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:31.324718  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:30.825879  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:32.826458  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.154081  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:33.168252  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:33.168353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:33.205675  301425 cri.go:89] found id: ""
	I0729 13:41:33.205708  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.205719  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:33.205727  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:33.205799  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:33.240556  301425 cri.go:89] found id: ""
	I0729 13:41:33.240582  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.240590  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:33.240596  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:33.240644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:33.276662  301425 cri.go:89] found id: ""
	I0729 13:41:33.276690  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.276698  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:33.276704  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:33.276773  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:33.318631  301425 cri.go:89] found id: ""
	I0729 13:41:33.318667  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.318677  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:33.318685  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:33.318762  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:33.354372  301425 cri.go:89] found id: ""
	I0729 13:41:33.354403  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.354412  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:33.354421  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:33.354475  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:33.389309  301425 cri.go:89] found id: ""
	I0729 13:41:33.389337  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.389346  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:33.389352  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:33.389404  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:33.423689  301425 cri.go:89] found id: ""
	I0729 13:41:33.423732  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.423745  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:33.423753  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:33.423823  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:33.457556  301425 cri.go:89] found id: ""
	I0729 13:41:33.457593  301425 logs.go:276] 0 containers: []
	W0729 13:41:33.457605  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:33.457618  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:33.457634  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:33.534377  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:33.534416  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:33.579646  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:33.579689  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:33.629784  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:33.629819  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:33.643878  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:33.643912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:33.716446  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:32.167152  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:34.666054  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.667479  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:33.823726  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.824199  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:35.324827  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.325672  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:36.216598  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:36.229904  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:36.230003  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:36.263721  301425 cri.go:89] found id: ""
	I0729 13:41:36.263752  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.263771  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:36.263786  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:36.263838  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:36.297900  301425 cri.go:89] found id: ""
	I0729 13:41:36.297932  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.297950  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:36.297958  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:36.298023  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:36.338037  301425 cri.go:89] found id: ""
	I0729 13:41:36.338064  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.338072  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:36.338078  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:36.338125  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:36.375334  301425 cri.go:89] found id: ""
	I0729 13:41:36.375362  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.375370  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:36.375375  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:36.375426  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:36.410760  301425 cri.go:89] found id: ""
	I0729 13:41:36.410794  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.410805  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:36.410813  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:36.410888  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:36.445247  301425 cri.go:89] found id: ""
	I0729 13:41:36.445280  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.445291  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:36.445300  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:36.445364  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:36.487183  301425 cri.go:89] found id: ""
	I0729 13:41:36.487214  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.487221  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:36.487228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:36.487301  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:36.522407  301425 cri.go:89] found id: ""
	I0729 13:41:36.522433  301425 logs.go:276] 0 containers: []
	W0729 13:41:36.522442  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:36.522453  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:36.522468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:36.537163  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:36.537197  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:36.608334  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:36.608361  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:36.608376  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:36.689026  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:36.689074  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:36.728580  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:36.728618  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.279605  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:39.293259  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:39.293320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:39.329070  301425 cri.go:89] found id: ""
	I0729 13:41:39.329095  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.329103  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:39.329109  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:39.329160  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:39.362992  301425 cri.go:89] found id: ""
	I0729 13:41:39.363023  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.363032  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:39.363038  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:39.363100  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:39.403094  301425 cri.go:89] found id: ""
	I0729 13:41:39.403128  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.403140  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:39.403147  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:39.403201  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:39.435761  301425 cri.go:89] found id: ""
	I0729 13:41:39.435795  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.435806  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:39.435814  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:39.435881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:39.468299  301425 cri.go:89] found id: ""
	I0729 13:41:39.468332  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.468341  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:39.468349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:39.468417  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:39.505114  301425 cri.go:89] found id: ""
	I0729 13:41:39.505149  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.505162  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:39.505172  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:39.505234  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:39.536942  301425 cri.go:89] found id: ""
	I0729 13:41:39.536975  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.536986  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:39.536994  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:39.537064  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:39.577394  301425 cri.go:89] found id: ""
	I0729 13:41:39.577427  301425 logs.go:276] 0 containers: []
	W0729 13:41:39.577439  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:39.577451  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:39.577468  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:39.631143  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:39.631184  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:39.645020  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:39.645047  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:39.718256  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:39.718283  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:39.718297  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:39.801990  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:39.802036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:39.166762  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.167646  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:37.824966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.825836  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.324009  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:39.327169  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:41.826091  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:42.347066  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:42.359902  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:42.359983  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:42.395494  301425 cri.go:89] found id: ""
	I0729 13:41:42.395529  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.395540  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:42.395548  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:42.395611  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:42.429305  301425 cri.go:89] found id: ""
	I0729 13:41:42.429334  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.429343  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:42.429350  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:42.429401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:42.466902  301425 cri.go:89] found id: ""
	I0729 13:41:42.466931  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.466942  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:42.466949  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:42.467017  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:42.504582  301425 cri.go:89] found id: ""
	I0729 13:41:42.504618  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.504628  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:42.504652  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:42.504717  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:42.539649  301425 cri.go:89] found id: ""
	I0729 13:41:42.539676  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.539686  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:42.539695  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:42.539758  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:42.579209  301425 cri.go:89] found id: ""
	I0729 13:41:42.579238  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.579249  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:42.579257  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:42.579320  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:42.614832  301425 cri.go:89] found id: ""
	I0729 13:41:42.614861  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.614869  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:42.614874  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:42.614925  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:42.651837  301425 cri.go:89] found id: ""
	I0729 13:41:42.651865  301425 logs.go:276] 0 containers: []
	W0729 13:41:42.651873  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:42.651883  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:42.651899  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:42.707149  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:42.707190  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:42.720990  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:42.721043  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:42.789818  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:42.789849  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:42.789867  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:42.871880  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:42.871934  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.416172  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:45.428923  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:45.428994  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:45.466667  301425 cri.go:89] found id: ""
	I0729 13:41:45.466699  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.466710  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:45.466717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:45.466783  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:45.501779  301425 cri.go:89] found id: ""
	I0729 13:41:45.501813  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.501825  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:45.501832  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:45.501896  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:45.537507  301425 cri.go:89] found id: ""
	I0729 13:41:45.537537  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.537547  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:45.537554  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:45.537619  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:45.575430  301425 cri.go:89] found id: ""
	I0729 13:41:45.575460  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.575467  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:45.575474  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:45.575523  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:45.613009  301425 cri.go:89] found id: ""
	I0729 13:41:45.613038  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.613047  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:45.613053  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:45.613103  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:45.650734  301425 cri.go:89] found id: ""
	I0729 13:41:45.650767  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.650778  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:45.650786  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:45.650853  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:45.684301  301425 cri.go:89] found id: ""
	I0729 13:41:45.684332  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.684341  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:45.684349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:45.684416  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:45.719861  301425 cri.go:89] found id: ""
	I0729 13:41:45.719901  301425 logs.go:276] 0 containers: []
	W0729 13:41:45.719911  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:45.719921  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:45.719936  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:45.800422  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:45.800464  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:45.842460  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:45.842493  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:45.897388  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:45.897430  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:45.911554  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:45.911587  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:41:43.665771  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.666196  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:44.325813  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:46.824774  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:43.828518  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:45.830106  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:48.325196  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	W0729 13:41:45.984435  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.485014  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:48.498038  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:48.498110  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:48.534248  301425 cri.go:89] found id: ""
	I0729 13:41:48.534280  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.534291  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:48.534299  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:48.534362  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:48.572411  301425 cri.go:89] found id: ""
	I0729 13:41:48.572445  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.572457  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:48.572465  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:48.572524  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:48.612345  301425 cri.go:89] found id: ""
	I0729 13:41:48.612373  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.612381  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:48.612387  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:48.612450  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:48.650334  301425 cri.go:89] found id: ""
	I0729 13:41:48.650385  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.650395  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:48.650401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:48.650466  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:48.687460  301425 cri.go:89] found id: ""
	I0729 13:41:48.687490  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.687501  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:48.687508  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:48.687572  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:48.735028  301425 cri.go:89] found id: ""
	I0729 13:41:48.735064  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.735077  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:48.735085  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:48.735142  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:48.771175  301425 cri.go:89] found id: ""
	I0729 13:41:48.771209  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.771220  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:48.771228  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:48.771300  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:48.808267  301425 cri.go:89] found id: ""
	I0729 13:41:48.808295  301425 logs.go:276] 0 containers: []
	W0729 13:41:48.808304  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:48.808314  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:48.808328  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:48.850520  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:48.850557  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:48.902563  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:48.902612  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:48.919082  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:48.919114  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:48.999185  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:48.999213  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:48.999241  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:48.166020  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.166237  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:49.323402  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.326596  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:50.825399  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:52.831823  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:51.579922  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:51.593149  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:51.593213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:51.626302  301425 cri.go:89] found id: ""
	I0729 13:41:51.626330  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.626338  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:51.626344  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:51.626393  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:51.659551  301425 cri.go:89] found id: ""
	I0729 13:41:51.659578  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.659586  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:51.659592  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:51.659642  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:51.696842  301425 cri.go:89] found id: ""
	I0729 13:41:51.696868  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.696876  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:51.696882  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:51.696937  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:51.737209  301425 cri.go:89] found id: ""
	I0729 13:41:51.737237  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.737246  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:51.737253  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:51.737317  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:51.772782  301425 cri.go:89] found id: ""
	I0729 13:41:51.772829  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.772842  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:51.772850  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:51.772921  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:51.806649  301425 cri.go:89] found id: ""
	I0729 13:41:51.806679  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.806690  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:51.806698  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:51.806771  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:51.848950  301425 cri.go:89] found id: ""
	I0729 13:41:51.848978  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.848989  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:51.848997  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:51.849065  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:51.884875  301425 cri.go:89] found id: ""
	I0729 13:41:51.884902  301425 logs.go:276] 0 containers: []
	W0729 13:41:51.884910  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:51.884920  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:51.884932  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:51.964282  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:51.964322  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:52.004218  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:52.004251  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:52.056230  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:52.056266  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.069591  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:52.069622  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:52.142552  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:54.643154  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:54.657199  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:54.657259  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:54.694124  301425 cri.go:89] found id: ""
	I0729 13:41:54.694152  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.694159  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:54.694165  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:54.694221  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:54.732072  301425 cri.go:89] found id: ""
	I0729 13:41:54.732109  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.732119  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:54.732127  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:54.732194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:54.768257  301425 cri.go:89] found id: ""
	I0729 13:41:54.768294  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.768306  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:54.768314  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:54.768383  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:54.807596  301425 cri.go:89] found id: ""
	I0729 13:41:54.807631  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.807643  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:54.807651  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:54.807716  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:54.845107  301425 cri.go:89] found id: ""
	I0729 13:41:54.845134  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.845142  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:54.845148  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:54.845197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:54.880627  301425 cri.go:89] found id: ""
	I0729 13:41:54.880655  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.880667  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:54.880675  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:54.880750  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:54.918122  301425 cri.go:89] found id: ""
	I0729 13:41:54.918151  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.918159  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:54.918165  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:54.918219  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:54.956943  301425 cri.go:89] found id: ""
	I0729 13:41:54.956986  301425 logs.go:276] 0 containers: []
	W0729 13:41:54.956999  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:54.957022  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:54.957036  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:55.032512  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:55.032547  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:55.032564  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:55.116653  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:55.116699  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:55.177030  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:55.177059  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:41:55.238789  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:55.238831  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:52.166339  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:54.666569  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:53.824694  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:56.324761  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:55.324698  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.326135  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:57.753504  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:41:57.766354  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:41:57.766436  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:41:57.802691  301425 cri.go:89] found id: ""
	I0729 13:41:57.802728  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.802740  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:41:57.802746  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:41:57.802807  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:41:57.839800  301425 cri.go:89] found id: ""
	I0729 13:41:57.839823  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.839830  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:41:57.839846  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:41:57.839902  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:41:57.881592  301425 cri.go:89] found id: ""
	I0729 13:41:57.881617  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.881625  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:41:57.881631  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:41:57.881681  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.916245  301425 cri.go:89] found id: ""
	I0729 13:41:57.916273  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.916282  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:41:57.916290  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:41:57.916346  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:41:57.952224  301425 cri.go:89] found id: ""
	I0729 13:41:57.952261  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.952272  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:41:57.952280  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:41:57.952340  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:41:57.985508  301425 cri.go:89] found id: ""
	I0729 13:41:57.985537  301425 logs.go:276] 0 containers: []
	W0729 13:41:57.985548  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:41:57.985557  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:41:57.985624  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:41:58.022354  301425 cri.go:89] found id: ""
	I0729 13:41:58.022382  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.022391  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:41:58.022397  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:41:58.022462  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:41:58.055865  301425 cri.go:89] found id: ""
	I0729 13:41:58.055891  301425 logs.go:276] 0 containers: []
	W0729 13:41:58.055900  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:41:58.055914  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:41:58.055931  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:41:58.069143  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:41:58.069177  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:41:58.143137  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:41:58.143164  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:41:58.143183  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:41:58.224631  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:41:58.224672  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:41:58.266437  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:41:58.266470  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:00.819300  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:00.834195  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:00.834258  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:00.869660  301425 cri.go:89] found id: ""
	I0729 13:42:00.869697  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.869709  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:00.869717  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:00.869777  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:00.915601  301425 cri.go:89] found id: ""
	I0729 13:42:00.915630  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.915638  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:00.915644  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:00.915694  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:00.956981  301425 cri.go:89] found id: ""
	I0729 13:42:00.957020  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.957028  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:00.957034  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:00.957094  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:41:57.166038  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.666455  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.666824  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:58.824729  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.825513  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:41:59.825074  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:01.826480  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:00.995761  301425 cri.go:89] found id: ""
	I0729 13:42:00.995793  301425 logs.go:276] 0 containers: []
	W0729 13:42:00.995801  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:00.995817  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:00.995869  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:01.047668  301425 cri.go:89] found id: ""
	I0729 13:42:01.047699  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.047707  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:01.047713  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:01.047787  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:01.085178  301425 cri.go:89] found id: ""
	I0729 13:42:01.085209  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.085217  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:01.085224  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:01.085278  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:01.125282  301425 cri.go:89] found id: ""
	I0729 13:42:01.125310  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.125320  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:01.125329  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:01.125396  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:01.165972  301425 cri.go:89] found id: ""
	I0729 13:42:01.166005  301425 logs.go:276] 0 containers: []
	W0729 13:42:01.166021  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:01.166033  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:01.166049  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:01.236500  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:01.236523  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:01.236540  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:01.320918  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:01.320959  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:01.366975  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:01.367015  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:01.420347  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:01.420389  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:03.936048  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:03.949603  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:03.949679  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:03.987529  301425 cri.go:89] found id: ""
	I0729 13:42:03.987557  301425 logs.go:276] 0 containers: []
	W0729 13:42:03.987567  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:03.987574  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:03.987639  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:04.027325  301425 cri.go:89] found id: ""
	I0729 13:42:04.027355  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.027365  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:04.027372  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:04.027437  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:04.063019  301425 cri.go:89] found id: ""
	I0729 13:42:04.063050  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.063059  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:04.063065  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:04.063117  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:04.101106  301425 cri.go:89] found id: ""
	I0729 13:42:04.101135  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.101146  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:04.101153  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:04.101242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:04.137186  301425 cri.go:89] found id: ""
	I0729 13:42:04.137219  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.137230  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:04.137238  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:04.137302  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:04.175732  301425 cri.go:89] found id: ""
	I0729 13:42:04.175761  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.175770  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:04.175776  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:04.175826  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:04.213265  301425 cri.go:89] found id: ""
	I0729 13:42:04.213296  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.213307  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:04.213315  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:04.213381  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:04.248581  301425 cri.go:89] found id: ""
	I0729 13:42:04.248609  301425 logs.go:276] 0 containers: []
	W0729 13:42:04.248617  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:04.248627  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:04.248643  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:04.303277  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:04.303400  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:04.317518  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:04.317547  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:04.385209  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:04.385229  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:04.385242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:04.470629  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:04.470680  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:04.167299  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.168006  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.324087  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:05.324904  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:03.826588  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:06.325326  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:08.326125  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.012455  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:07.028535  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:07.028621  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:07.063453  301425 cri.go:89] found id: ""
	I0729 13:42:07.063496  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.063505  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:07.063511  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:07.063582  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:07.098243  301425 cri.go:89] found id: ""
	I0729 13:42:07.098274  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.098284  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:07.098291  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:07.098357  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:07.138122  301425 cri.go:89] found id: ""
	I0729 13:42:07.138149  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.138157  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:07.138162  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:07.138213  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:07.176772  301425 cri.go:89] found id: ""
	I0729 13:42:07.176814  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.176826  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:07.176835  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:07.176894  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:07.214867  301425 cri.go:89] found id: ""
	I0729 13:42:07.214898  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.214914  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:07.214920  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:07.214979  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:07.253443  301425 cri.go:89] found id: ""
	I0729 13:42:07.253471  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.253481  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:07.253490  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:07.253550  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:07.287284  301425 cri.go:89] found id: ""
	I0729 13:42:07.287326  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.287338  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:07.287349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:07.287411  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:07.330550  301425 cri.go:89] found id: ""
	I0729 13:42:07.330577  301425 logs.go:276] 0 containers: []
	W0729 13:42:07.330588  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:07.330599  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:07.330620  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:07.384226  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:07.384268  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:07.398790  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:07.398817  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:07.462868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:07.462893  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:07.462914  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:07.538665  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:07.538706  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.078452  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:10.091962  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:10.092027  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:10.127401  301425 cri.go:89] found id: ""
	I0729 13:42:10.127434  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.127445  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:10.127454  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:10.127531  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:10.161088  301425 cri.go:89] found id: ""
	I0729 13:42:10.161117  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.161127  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:10.161134  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:10.161187  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:10.199721  301425 cri.go:89] found id: ""
	I0729 13:42:10.199751  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.199763  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:10.199769  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:10.199821  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:10.237067  301425 cri.go:89] found id: ""
	I0729 13:42:10.237106  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.237120  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:10.237127  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:10.237191  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:10.275863  301425 cri.go:89] found id: ""
	I0729 13:42:10.275894  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.275909  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:10.275918  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:10.275981  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:10.313234  301425 cri.go:89] found id: ""
	I0729 13:42:10.313262  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.313270  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:10.313276  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:10.313334  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:10.353530  301425 cri.go:89] found id: ""
	I0729 13:42:10.353558  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.353569  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:10.353576  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:10.353644  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:10.389488  301425 cri.go:89] found id: ""
	I0729 13:42:10.389516  301425 logs.go:276] 0 containers: []
	W0729 13:42:10.389527  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:10.389539  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:10.389562  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:10.428705  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:10.428740  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:10.484413  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:10.484456  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:10.499203  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:10.499248  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:10.570868  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:10.570894  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:10.570907  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:08.667158  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:11.166721  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:07.825638  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.324753  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:10.326752  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:12.826001  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:13.151788  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:13.165297  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:13.165367  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:13.203752  301425 cri.go:89] found id: ""
	I0729 13:42:13.203786  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.203798  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:13.203805  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:13.203874  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:13.240454  301425 cri.go:89] found id: ""
	I0729 13:42:13.240491  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.240499  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:13.240504  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:13.240556  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:13.276508  301425 cri.go:89] found id: ""
	I0729 13:42:13.276536  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.276545  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:13.276553  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:13.276617  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:13.311252  301425 cri.go:89] found id: ""
	I0729 13:42:13.311280  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.311291  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:13.311299  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:13.311353  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:13.351777  301425 cri.go:89] found id: ""
	I0729 13:42:13.351808  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.351817  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:13.351823  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:13.351881  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:13.389020  301425 cri.go:89] found id: ""
	I0729 13:42:13.389049  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.389058  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:13.389064  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:13.389126  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:13.424353  301425 cri.go:89] found id: ""
	I0729 13:42:13.424387  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.424395  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:13.424401  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:13.424451  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:13.460755  301425 cri.go:89] found id: ""
	I0729 13:42:13.460788  301425 logs.go:276] 0 containers: []
	W0729 13:42:13.460817  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:13.460830  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:13.460850  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:13.500201  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:13.500234  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:13.553319  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:13.553357  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:13.567496  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:13.567529  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:13.644662  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:13.644686  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:13.644700  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:13.667287  301044 pod_ready.go:102] pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.160289  301044 pod_ready.go:81] duration metric: took 4m0.000442608s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:16.160321  301044 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dlrjb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 13:42:16.160342  301044 pod_ready.go:38] duration metric: took 4m7.984743222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:16.160378  301044 kubeadm.go:597] duration metric: took 4m16.091281244s to restartPrimaryControlPlane
	W0729 13:42:16.160459  301044 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:16.160486  301044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:12.825387  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.826853  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:16.827679  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:14.829149  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326337  300746 pod_ready.go:102] pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:17.326370  300746 pod_ready.go:81] duration metric: took 4m0.007721109s for pod "metrics-server-78fcd8795b-dv8pr" in "kube-system" namespace to be "Ready" ...
	E0729 13:42:17.326383  300746 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:42:17.326392  300746 pod_ready.go:38] duration metric: took 4m8.417741792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:42:17.326410  300746 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:42:17.326446  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:17.326514  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:17.373993  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.374027  300746 cri.go:89] found id: ""
	I0729 13:42:17.374037  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:17.374118  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.384841  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:17.384929  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:17.422219  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.422253  300746 cri.go:89] found id: ""
	I0729 13:42:17.422263  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:17.422349  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.427319  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:17.427385  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:17.469310  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:17.469336  300746 cri.go:89] found id: ""
	I0729 13:42:17.469347  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:17.469412  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.474501  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:17.474590  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:17.520767  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:17.520808  300746 cri.go:89] found id: ""
	I0729 13:42:17.520818  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:17.520881  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.525543  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:17.525643  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:17.572718  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.572749  300746 cri.go:89] found id: ""
	I0729 13:42:17.572758  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:17.572839  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.577227  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:17.577304  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:17.614076  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.614098  300746 cri.go:89] found id: ""
	I0729 13:42:17.614106  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:17.614153  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.618404  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:17.618479  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:17.666242  300746 cri.go:89] found id: ""
	I0729 13:42:17.666275  300746 logs.go:276] 0 containers: []
	W0729 13:42:17.666285  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:17.666301  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:17.666373  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:17.713379  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:17.713411  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:17.713418  300746 cri.go:89] found id: ""
	I0729 13:42:17.713428  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:17.713493  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.719026  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:17.723948  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:17.723974  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:17.743561  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:17.743607  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:17.803393  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:17.803425  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:17.855689  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:17.855723  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:17.898327  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:17.898361  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:17.951024  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:17.951060  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:18.014040  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:18.014082  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:18.159937  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:18.159984  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:18.201626  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:18.201667  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:18.247168  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:18.247211  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:18.291431  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:18.291469  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:18.333636  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:18.333671  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.226602  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:16.242934  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:16.243005  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:16.284033  301425 cri.go:89] found id: ""
	I0729 13:42:16.284064  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.284075  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:16.284083  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:16.284152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:16.328362  301425 cri.go:89] found id: ""
	I0729 13:42:16.328388  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.328396  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:16.328402  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:16.328464  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:16.372664  301425 cri.go:89] found id: ""
	I0729 13:42:16.372701  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.372712  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:16.372727  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:16.372818  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:16.416085  301425 cri.go:89] found id: ""
	I0729 13:42:16.416119  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.416130  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:16.416138  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:16.416194  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:16.457786  301425 cri.go:89] found id: ""
	I0729 13:42:16.457819  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.457830  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:16.457838  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:16.457903  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:16.498929  301425 cri.go:89] found id: ""
	I0729 13:42:16.498962  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.498971  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:16.498979  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:16.499043  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:16.546159  301425 cri.go:89] found id: ""
	I0729 13:42:16.546187  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.546199  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:16.546207  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:16.546270  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:16.585010  301425 cri.go:89] found id: ""
	I0729 13:42:16.585041  301425 logs.go:276] 0 containers: []
	W0729 13:42:16.585052  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:16.585065  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:16.585081  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:16.639033  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:16.639079  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:16.656209  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:16.656242  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:16.734835  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:16.734863  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:16.734940  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:16.818756  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:16.818798  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.370796  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:19.384267  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:19.384354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:19.425595  301425 cri.go:89] found id: ""
	I0729 13:42:19.425629  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.425641  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:19.425650  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:19.425715  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:19.461470  301425 cri.go:89] found id: ""
	I0729 13:42:19.461506  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.461517  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:19.461524  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:19.461592  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:19.508232  301425 cri.go:89] found id: ""
	I0729 13:42:19.508265  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.508275  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:19.508283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:19.508360  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:19.546226  301425 cri.go:89] found id: ""
	I0729 13:42:19.546259  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.546275  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:19.546283  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:19.546354  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:19.581125  301425 cri.go:89] found id: ""
	I0729 13:42:19.581156  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.581167  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:19.581176  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:19.581242  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:19.619680  301425 cri.go:89] found id: ""
	I0729 13:42:19.619719  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.619728  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:19.619736  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:19.619800  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:19.657096  301425 cri.go:89] found id: ""
	I0729 13:42:19.657126  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.657136  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:19.657142  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:19.657203  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:19.697247  301425 cri.go:89] found id: ""
	I0729 13:42:19.697277  301425 logs.go:276] 0 containers: []
	W0729 13:42:19.697286  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:19.697297  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:19.697312  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:19.714900  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:19.714935  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:19.794118  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:19.794145  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:19.794161  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:19.907077  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:19.907122  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:19.949841  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:19.949871  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:19.324474  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:21.826117  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:18.858720  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:18.858773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:21.419344  300746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:21.440121  300746 api_server.go:72] duration metric: took 4m17.790553991s to wait for apiserver process to appear ...
	I0729 13:42:21.440149  300746 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:42:21.440190  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:21.440242  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:21.485874  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:21.485897  300746 cri.go:89] found id: ""
	I0729 13:42:21.485905  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:21.485956  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.490424  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:21.490493  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:21.532174  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:21.532202  300746 cri.go:89] found id: ""
	I0729 13:42:21.532211  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:21.532259  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.536561  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:21.536622  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:21.579375  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:21.579397  300746 cri.go:89] found id: ""
	I0729 13:42:21.579404  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:21.579450  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.584710  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:21.584779  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:21.621437  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.621465  300746 cri.go:89] found id: ""
	I0729 13:42:21.621475  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:21.621536  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.625829  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:21.625898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:21.666063  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:21.666086  300746 cri.go:89] found id: ""
	I0729 13:42:21.666095  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:21.666162  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.670822  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:21.670898  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:21.713993  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:21.714022  300746 cri.go:89] found id: ""
	I0729 13:42:21.714032  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:21.714099  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.718967  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:21.719044  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:21.761282  300746 cri.go:89] found id: ""
	I0729 13:42:21.761312  300746 logs.go:276] 0 containers: []
	W0729 13:42:21.761320  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:21.761327  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:21.761390  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:21.810085  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:21.810114  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:21.810121  300746 cri.go:89] found id: ""
	I0729 13:42:21.810130  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:21.810185  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.814713  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:21.819968  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:21.819996  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:21.834798  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:21.834823  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:21.957963  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:21.958000  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:21.995345  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:21.995376  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:22.037737  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:22.037773  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:22.074774  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:22.074813  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:22.123172  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.123205  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.181432  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:22.181473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:22.237128  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:22.237162  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:22.285733  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:22.285766  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:22.328258  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:22.328291  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:22.381239  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.381276  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:22.840466  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:22.840504  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:22.515296  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:22.529187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:22.529286  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:22.573033  301425 cri.go:89] found id: ""
	I0729 13:42:22.573070  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.573082  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:22.573091  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:22.573152  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:22.608443  301425 cri.go:89] found id: ""
	I0729 13:42:22.608476  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.608489  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:22.608496  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:22.608566  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:22.641672  301425 cri.go:89] found id: ""
	I0729 13:42:22.641704  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.641716  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:22.641724  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:22.641781  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:22.673902  301425 cri.go:89] found id: ""
	I0729 13:42:22.673934  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.673944  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:22.673952  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:22.674012  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:22.715131  301425 cri.go:89] found id: ""
	I0729 13:42:22.715165  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.715179  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:22.715187  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:22.715251  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:22.748807  301425 cri.go:89] found id: ""
	I0729 13:42:22.748838  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.748848  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:22.748856  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:22.748924  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:22.781972  301425 cri.go:89] found id: ""
	I0729 13:42:22.782002  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.782012  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:22.782021  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:22.782088  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:22.815791  301425 cri.go:89] found id: ""
	I0729 13:42:22.815823  301425 logs.go:276] 0 containers: []
	W0729 13:42:22.815834  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:22.815848  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:22.815864  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:22.873595  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:22.873631  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:22.888081  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:22.888123  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:22.959873  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:22.959899  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:22.959912  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:23.040996  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:23.041035  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
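(The "container status" step above uses a fallback chain rather than a fixed runtime client; a minimal manual equivalent on the node, assuming shell access, is:)

    # prefer crictl (CRI-O here); fall back to docker if crictl is not installed
    sudo crictl ps -a || sudo docker ps -a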
	I0729 13:42:25.585159  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:25.604154  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.604240  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.645428  301425 cri.go:89] found id: ""
	I0729 13:42:25.645459  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.645466  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:42:25.645474  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.645534  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.682758  301425 cri.go:89] found id: ""
	I0729 13:42:25.682785  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.682793  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:42:25.682799  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.682864  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.724297  301425 cri.go:89] found id: ""
	I0729 13:42:25.724330  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.724341  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:42:25.724349  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.724401  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.761124  301425 cri.go:89] found id: ""
	I0729 13:42:25.761157  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.761168  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:42:25.761177  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.761229  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.802698  301425 cri.go:89] found id: ""
	I0729 13:42:25.802728  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.802741  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:42:25.802750  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.802804  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.840472  301425 cri.go:89] found id: ""
	I0729 13:42:25.840499  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.840509  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:42:25.840516  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.840586  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.875217  301425 cri.go:89] found id: ""
	I0729 13:42:25.875255  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.875267  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.875273  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:42:25.875345  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:42:25.919895  301425 cri.go:89] found id: ""
	I0729 13:42:25.919937  301425 logs.go:276] 0 containers: []
	W0729 13:42:25.919948  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:42:25.919963  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.919988  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:24.324138  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:26.324843  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:25.399606  300746 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I0729 13:42:25.405339  300746 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I0729 13:42:25.406585  300746 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 13:42:25.406607  300746 api_server.go:131] duration metric: took 3.966451518s to wait for apiserver health ...
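(The apiserver health probe above can be reproduced by hand against the same endpoint; curl flags assumed, -k skips certificate verification:)

    # expected to print "ok" when the apiserver is healthy
    curl -k https://192.168.61.84:8443/healthz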
	I0729 13:42:25.406615  300746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:42:25.406640  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:42:25.406686  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:42:25.442039  300746 cri.go:89] found id: "f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:25.442068  300746 cri.go:89] found id: ""
	I0729 13:42:25.442079  300746 logs.go:276] 1 containers: [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2]
	I0729 13:42:25.442140  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.446769  300746 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:42:25.446830  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:42:25.482122  300746 cri.go:89] found id: "f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:25.482144  300746 cri.go:89] found id: ""
	I0729 13:42:25.482156  300746 logs.go:276] 1 containers: [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6]
	I0729 13:42:25.482211  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.486666  300746 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:42:25.486729  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:42:25.534553  300746 cri.go:89] found id: "5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:25.534584  300746 cri.go:89] found id: ""
	I0729 13:42:25.534595  300746 logs.go:276] 1 containers: [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e]
	I0729 13:42:25.534657  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.539546  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:42:25.539624  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:42:25.577538  300746 cri.go:89] found id: "6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.577562  300746 cri.go:89] found id: ""
	I0729 13:42:25.577572  300746 logs.go:276] 1 containers: [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e]
	I0729 13:42:25.577635  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.582377  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:42:25.582457  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:42:25.628918  300746 cri.go:89] found id: "a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:25.628945  300746 cri.go:89] found id: ""
	I0729 13:42:25.628955  300746 logs.go:276] 1 containers: [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2]
	I0729 13:42:25.629027  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.633502  300746 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:42:25.633592  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:42:25.673133  300746 cri.go:89] found id: "5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.673156  300746 cri.go:89] found id: ""
	I0729 13:42:25.673163  300746 logs.go:276] 1 containers: [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa]
	I0729 13:42:25.673210  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.677905  300746 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:42:25.677994  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:42:25.724757  300746 cri.go:89] found id: ""
	I0729 13:42:25.724780  300746 logs.go:276] 0 containers: []
	W0729 13:42:25.724805  300746 logs.go:278] No container was found matching "kindnet"
	I0729 13:42:25.724813  300746 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:42:25.724887  300746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:42:25.775101  300746 cri.go:89] found id: "5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.775130  300746 cri.go:89] found id: "09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:25.775136  300746 cri.go:89] found id: ""
	I0729 13:42:25.775144  300746 logs.go:276] 2 containers: [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5]
	I0729 13:42:25.775219  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.782008  300746 ssh_runner.go:195] Run: which crictl
	I0729 13:42:25.787032  300746 logs.go:123] Gathering logs for kube-scheduler [6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e] ...
	I0729 13:42:25.787064  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d236da3b529e9cdeaa413f4323bf3d880644c71c3496f8c69cf4c413b670c3e"
	I0729 13:42:25.834985  300746 logs.go:123] Gathering logs for kube-controller-manager [5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa] ...
	I0729 13:42:25.835026  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c91d66f36628b736dfaa4040fb9765acd4f2e184de781ec67a54065942f0eaa"
	I0729 13:42:25.897295  300746 logs.go:123] Gathering logs for storage-provisioner [5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579] ...
	I0729 13:42:25.897338  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dcd5030f62fd91a9ed7230628c6fefa000b006fc3189718d2724394b541c579"
	I0729 13:42:25.938020  300746 logs.go:123] Gathering logs for kubelet ...
	I0729 13:42:25.938053  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:42:26.002775  300746 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:26.002808  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:26.021431  300746 logs.go:123] Gathering logs for kube-apiserver [f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2] ...
	I0729 13:42:26.021473  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08ba8d78f5055c01d658a21909e14d2c8ce671d4279eebbedfe47058989e7e2"
	I0729 13:42:26.071861  300746 logs.go:123] Gathering logs for etcd [f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6] ...
	I0729 13:42:26.071898  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f784cabd7fc33aca89916d222bb09baee447a190a6b05df6179f5e1c0fc97ea6"
	I0729 13:42:26.130018  300746 logs.go:123] Gathering logs for coredns [5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e] ...
	I0729 13:42:26.130057  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5889da7fe3143395386358a25a492a5b2bf8b38deff25033861a330d7d31394e"
	I0729 13:42:26.170233  300746 logs.go:123] Gathering logs for storage-provisioner [09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5] ...
	I0729 13:42:26.170290  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fdadca1aa7f8a2b7e7370b47659d9c9abfc20d8f0e5af459b77d7ca30b39d5"
	I0729 13:42:26.207687  300746 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.207718  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.600518  300746 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:26.600575  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:42:26.707024  300746 logs.go:123] Gathering logs for kube-proxy [a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2] ...
	I0729 13:42:26.707074  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ed90bc70759f96478b5a925d2a4e86bc0c580e78a970c5c6553f94eef367b2"
	I0729 13:42:26.753205  300746 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.753240  300746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:29.302597  300746 system_pods.go:59] 8 kube-system pods found
	I0729 13:42:29.302626  300746 system_pods.go:61] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.302630  300746 system_pods.go:61] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.302634  300746 system_pods.go:61] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.302638  300746 system_pods.go:61] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.302641  300746 system_pods.go:61] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.302644  300746 system_pods.go:61] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.302649  300746 system_pods.go:61] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.302654  300746 system_pods.go:61] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.302661  300746 system_pods.go:74] duration metric: took 3.896040202s to wait for pod list to return data ...
	I0729 13:42:29.302670  300746 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:42:29.305640  300746 default_sa.go:45] found service account: "default"
	I0729 13:42:29.305668  300746 default_sa.go:55] duration metric: took 2.989028ms for default service account to be created ...
	I0729 13:42:29.305679  300746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:42:29.310472  300746 system_pods.go:86] 8 kube-system pods found
	I0729 13:42:29.310495  300746 system_pods.go:89] "coredns-5cfdc65f69-kkrqd" [4d1ab6ca-6006-450e-8bef-bf9136e5e575] Running
	I0729 13:42:29.310500  300746 system_pods.go:89] "etcd-no-preload-566777" [43cffb00-8a2d-44bc-8ce9-f6fd5e72f728] Running
	I0729 13:42:29.310505  300746 system_pods.go:89] "kube-apiserver-no-preload-566777" [b26666a5-da6d-4db5-b4a8-fa289b194d27] Running
	I0729 13:42:29.310509  300746 system_pods.go:89] "kube-controller-manager-no-preload-566777" [77baec4e-54dc-41f5-b6e5-3cbc3ae27b15] Running
	I0729 13:42:29.310513  300746 system_pods.go:89] "kube-proxy-ql6wf" [d8ee6e47-c0f9-4c98-b294-3ee39b627884] Running
	I0729 13:42:29.310517  300746 system_pods.go:89] "kube-scheduler-no-preload-566777" [a3a6b926-a529-4a10-a84f-e9bb565ab00f] Running
	I0729 13:42:29.310523  300746 system_pods.go:89] "metrics-server-78fcd8795b-dv8pr" [0505f724-9244-4dca-9ade-6209131087e8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:42:29.310528  300746 system_pods.go:89] "storage-provisioner" [e3074247-17ba-465c-8cfe-d0fcc0241468] Running
	I0729 13:42:29.310536  300746 system_pods.go:126] duration metric: took 4.851477ms to wait for k8s-apps to be running ...
	I0729 13:42:29.310545  300746 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:42:29.310580  300746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.329123  300746 system_svc.go:56] duration metric: took 18.569258ms WaitForService to wait for kubelet
	I0729 13:42:29.329155  300746 kubeadm.go:582] duration metric: took 4m25.679589837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:42:29.329182  300746 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:42:29.332696  300746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:42:29.332726  300746 node_conditions.go:123] node cpu capacity is 2
	I0729 13:42:29.332741  300746 node_conditions.go:105] duration metric: took 3.551684ms to run NodePressure ...
	I0729 13:42:29.332756  300746 start.go:241] waiting for startup goroutines ...
	I0729 13:42:29.332770  300746 start.go:246] waiting for cluster config update ...
	I0729 13:42:29.332784  300746 start.go:255] writing updated cluster config ...
	I0729 13:42:29.333168  300746 ssh_runner.go:195] Run: rm -f paused
	I0729 13:42:29.394738  300746 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 13:42:29.396826  300746 out.go:177] * Done! kubectl is now configured to use "no-preload-566777" cluster and "default" namespace by default
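(The "minor skew: 1" note above is within kubectl's supported one-minor-version skew against the apiserver. A couple of hypothetical follow-up checks for the freshly configured context:)

    # context name is expected to match the profile, no-preload-566777
    kubectl config current-context
    # the kube-system pods listed by system_pods above should all be Running
    kubectl -n kube-system get pods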
	I0729 13:42:25.981964  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:42:25.982005  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:42:25.997546  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:42:25.997576  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:42:26.075879  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 13:42:26.075901  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:42:26.075917  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:42:26.158552  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:42:26.158593  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:42:28.704328  301425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:42:28.718946  301425 kubeadm.go:597] duration metric: took 4m3.546660825s to restartPrimaryControlPlane
	W0729 13:42:28.719041  301425 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 13:42:28.719086  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:42:29.251866  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:29.267009  301425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:29.277498  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:29.287980  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:29.288003  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:29.288054  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:42:29.297830  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:29.297890  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:29.308263  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:42:29.318332  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:29.318388  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:29.328684  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.339841  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:29.339894  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:29.351304  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:42:29.363901  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:29.363960  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
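(The cleanup loop above applies one pattern per file: keep the kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A condensed sketch of that per-file check, endpoint taken from the log:)

    # same check/remove pattern, shown for admin.conf
    sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf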
	I0729 13:42:29.377255  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:29.453113  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:42:29.453212  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:29.609835  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:29.609970  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:29.610106  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:29.812529  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:29.814455  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:29.814551  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:29.814633  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:29.814727  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:29.814799  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:29.814915  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:29.814979  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:29.815695  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:29.816098  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:29.816602  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:29.817114  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:29.817184  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:29.817266  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:30.122967  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:30.287162  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:30.336346  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:30.516317  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:30.532829  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:30.533732  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:30.533809  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:30.672345  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:30.674334  301425 out.go:204]   - Booting up control plane ...
	I0729 13:42:30.674492  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:30.681661  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:30.681784  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:30.683350  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:30.687290  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:42:28.327998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:30.823998  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:32.824105  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:34.825475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:37.324435  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:39.824490  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:42.323305  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:44.329376  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:46.823645  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:47.980926  301044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.820407091s)
	I0729 13:42:47.981010  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:42:47.997344  301044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:42:48.007813  301044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:42:48.017519  301044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:42:48.017538  301044 kubeadm.go:157] found existing configuration files:
	
	I0729 13:42:48.017579  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 13:42:48.028739  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:42:48.028819  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:42:48.038417  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 13:42:48.047864  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:42:48.047921  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:42:48.057408  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.066977  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:42:48.067040  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:42:48.077017  301044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 13:42:48.087204  301044 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:42:48.087267  301044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:42:48.097659  301044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:42:48.149712  301044 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:42:48.149883  301044 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:42:48.277280  301044 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:42:48.277441  301044 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:42:48.277578  301044 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:42:48.505523  301044 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:42:48.507718  301044 out.go:204]   - Generating certificates and keys ...
	I0729 13:42:48.507827  301044 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:42:48.507941  301044 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:42:48.508049  301044 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:42:48.508139  301044 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:42:48.508245  301044 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:42:48.508334  301044 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:42:48.508431  301044 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:42:48.508518  301044 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:42:48.508622  301044 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:42:48.508740  301044 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:42:48.508824  301044 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:42:48.508949  301044 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:42:48.545220  301044 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:42:48.620528  301044 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:42:48.781015  301044 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:42:49.039301  301044 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:42:49.104540  301044 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:42:49.105022  301044 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:42:49.107524  301044 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:42:49.109579  301044 out.go:204]   - Booting up control plane ...
	I0729 13:42:49.109698  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:42:49.109836  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:42:49.109924  301044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:42:49.129789  301044 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:42:49.130766  301044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:42:49.130844  301044 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:42:49.272901  301044 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:42:49.273017  301044 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:42:50.274804  301044 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001903151s
	I0729 13:42:50.274906  301044 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:42:48.825621  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:51.324025  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.276427  301044 kubeadm.go:310] [api-check] The API server is healthy after 5.001280529s
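(The api-check that just succeeded can be repeated with the pinned kubectl binary and kubeconfig used elsewhere in this log; a minimal sketch:)

    # prints "ok" once the apiserver answers its health endpoint
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl get --raw=/healthz \
      --kubeconfig=/var/lib/minikube/kubeconfig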
	I0729 13:42:55.289666  301044 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:42:55.309747  301044 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:42:55.343304  301044 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:42:55.343537  301044 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-972693 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:42:55.366319  301044 kubeadm.go:310] [bootstrap-token] Using token: bvsox4.ktqddck1jfi3aduz
	I0729 13:42:55.367592  301044 out.go:204]   - Configuring RBAC rules ...
	I0729 13:42:55.367695  301044 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:42:55.380118  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:42:55.393704  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:42:55.397859  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:42:55.401567  301044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:42:55.407851  301044 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:42:55.684714  301044 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:42:56.128597  301044 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:42:56.683879  301044 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:42:56.685050  301044 kubeadm.go:310] 
	I0729 13:42:56.685127  301044 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:42:56.685137  301044 kubeadm.go:310] 
	I0729 13:42:56.685216  301044 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:42:56.685226  301044 kubeadm.go:310] 
	I0729 13:42:56.685252  301044 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:42:56.685335  301044 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:42:56.685414  301044 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:42:56.685422  301044 kubeadm.go:310] 
	I0729 13:42:56.685527  301044 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:42:56.685550  301044 kubeadm.go:310] 
	I0729 13:42:56.685607  301044 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:42:56.685617  301044 kubeadm.go:310] 
	I0729 13:42:56.685684  301044 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:42:56.685800  301044 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:42:56.685916  301044 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:42:56.685933  301044 kubeadm.go:310] 
	I0729 13:42:56.686048  301044 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:42:56.686149  301044 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:42:56.686162  301044 kubeadm.go:310] 
	I0729 13:42:56.686277  301044 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686416  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 \
	I0729 13:42:56.686449  301044 kubeadm.go:310] 	--control-plane 
	I0729 13:42:56.686462  301044 kubeadm.go:310] 
	I0729 13:42:56.686562  301044 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:42:56.686571  301044 kubeadm.go:310] 
	I0729 13:42:56.686687  301044 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token bvsox4.ktqddck1jfi3aduz \
	I0729 13:42:56.686839  301044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b7d79e618e09335f1225ee1c167f798f09f9114a4d4906909127281738ac85b4 
	I0729 13:42:56.687046  301044 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
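(The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's public key; the standard kubeadm recipe for recomputing it, with the CA path taken from the certificateDir shown earlier, is:)

    # recompute the sha256:<hash> used by "kubeadm join"
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'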
	I0729 13:42:56.687123  301044 cni.go:84] Creating CNI manager for ""
	I0729 13:42:56.687140  301044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:42:56.689013  301044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:42:53.324453  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:55.326475  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:42:56.690282  301044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:42:56.703026  301044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
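(The bridge CNI step above drops a single conflist onto the node; its exact content, 496 bytes here, varies by minikube version, so rather than reproduce it, a quick way to inspect what was written:)

    # show the bridge CNI configuration minikube generated
    sudo cat /etc/cni/net.d/1-k8s.conflist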
	I0729 13:42:56.722677  301044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:56.722757  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-972693 minikube.k8s.io/updated_at=2024_07_29T13_42_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=default-k8s-diff-port-972693 minikube.k8s.io/primary=true
	I0729 13:42:56.738921  301044 ops.go:34] apiserver oom_adj: -16
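(The oom_adj value reported above is read straight from procfs; -16 tells the kernel's OOM killer to strongly prefer other processes over kube-apiserver. The read behind that line:)

    # negative values lower the likelihood of the apiserver being OOM-killed
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"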
	I0729 13:42:56.902369  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.402842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.902902  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.403358  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:58.903112  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.402540  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:59.902605  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.402440  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:00.903011  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:01.403295  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:42:57.823966  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:00.323772  300705 pod_ready.go:102] pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:01.818493  300705 pod_ready.go:81] duration metric: took 4m0.000972043s for pod "metrics-server-569cc877fc-nzn76" in "kube-system" namespace to be "Ready" ...
	E0729 13:43:01.818528  300705 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 13:43:01.818537  300705 pod_ready.go:38] duration metric: took 4m4.037818748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
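(When a readiness wait times out like this, the usual next step is to ask the cluster why the pod never went Ready; a hypothetical follow-up, pod name taken from the log:)

    # shows events and container statuses explaining the unready metrics-server pod
    kubectl -n kube-system describe pod metrics-server-569cc877fc-nzn76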
	I0729 13:43:01.818555  300705 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:01.818589  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:01.818643  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:01.874334  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:01.874359  300705 cri.go:89] found id: ""
	I0729 13:43:01.874369  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:01.874439  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.879122  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:01.879214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:01.919779  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:01.919804  300705 cri.go:89] found id: ""
	I0729 13:43:01.919814  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:01.919874  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.924895  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:01.924963  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:01.970365  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:01.970386  300705 cri.go:89] found id: ""
	I0729 13:43:01.970394  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:01.970444  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:01.975331  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:01.975409  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:02.013029  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.013062  300705 cri.go:89] found id: ""
	I0729 13:43:02.013074  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:02.013136  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.017958  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:02.018019  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:02.062357  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.062385  300705 cri.go:89] found id: ""
	I0729 13:43:02.062394  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:02.062463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.066791  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:02.066841  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:02.103790  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:02.103812  300705 cri.go:89] found id: ""
	I0729 13:43:02.103821  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:02.103882  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.108242  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:02.108293  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:02.151089  300705 cri.go:89] found id: ""
	I0729 13:43:02.151122  300705 logs.go:276] 0 containers: []
	W0729 13:43:02.151133  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:02.151141  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:02.151204  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:02.205700  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:02.205727  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.205732  300705 cri.go:89] found id: ""
	I0729 13:43:02.205741  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:02.205790  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.210332  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:02.214889  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:02.214913  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:02.229589  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:02.229621  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:02.278361  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:02.278394  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:02.319117  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:02.319146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:02.357874  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:02.357908  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:02.402114  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:02.402146  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:02.442480  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:02.442514  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:01.903256  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.403400  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.902925  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.402616  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:03.903161  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.403255  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:04.902489  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.402506  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:05.902530  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:06.402436  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:02.953914  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:02.953961  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:03.013404  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:03.013441  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:03.151261  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:03.151294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:03.199910  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:03.199964  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:03.257103  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:03.257137  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:03.308519  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:03.308559  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:05.857929  300705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:05.878306  300705 api_server.go:72] duration metric: took 4m15.820258046s to wait for apiserver process to appear ...
	I0729 13:43:05.878338  300705 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:05.878383  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:05.878451  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:05.924031  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:05.924071  300705 cri.go:89] found id: ""
	I0729 13:43:05.924083  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:05.924151  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.929284  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:05.929363  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:05.968980  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:05.969003  300705 cri.go:89] found id: ""
	I0729 13:43:05.969010  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:05.969056  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:05.973451  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:05.973516  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:06.011760  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.011784  300705 cri.go:89] found id: ""
	I0729 13:43:06.011794  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:06.011857  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.016065  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:06.016132  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:06.066319  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.066345  300705 cri.go:89] found id: ""
	I0729 13:43:06.066353  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:06.066420  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.071060  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:06.071120  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:06.117383  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.117405  300705 cri.go:89] found id: ""
	I0729 13:43:06.117413  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:06.117463  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.121968  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:06.122053  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:06.156125  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.156151  300705 cri.go:89] found id: ""
	I0729 13:43:06.156160  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:06.156209  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.160301  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:06.160366  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:06.206751  300705 cri.go:89] found id: ""
	I0729 13:43:06.206780  300705 logs.go:276] 0 containers: []
	W0729 13:43:06.206790  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:06.206798  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:06.206860  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:06.248884  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.248918  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:06.248925  300705 cri.go:89] found id: ""
	I0729 13:43:06.248936  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:06.249006  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.253087  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:06.257229  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:06.257252  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:06.291495  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:06.291528  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:06.330190  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:06.330219  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:06.366500  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:06.366536  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:06.424871  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:06.424906  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:06.855025  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:06.855069  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:06.870025  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:06.870055  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:06.986590  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:06.986630  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:07.036972  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:07.037007  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:07.092602  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:07.092646  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:07.135326  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:07.135366  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:07.190208  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:07.190247  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:07.241865  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:07.241896  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:06.902842  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.402861  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:07.903148  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.402619  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:08.902869  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.403349  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:09.903277  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.402468  301044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:43:10.535843  301044 kubeadm.go:1113] duration metric: took 13.813154738s to wait for elevateKubeSystemPrivileges
	I0729 13:43:10.535879  301044 kubeadm.go:394] duration metric: took 5m10.527995876s to StartCluster
	I0729 13:43:10.535899  301044 settings.go:142] acquiring lock: {Name:mkaeb23e6f07ae3d313c9f12985cbb8f6b957b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.535991  301044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:43:10.538845  301044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/kubeconfig: {Name:mk27f8d6af32549445a61ba79536433c658f6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:43:10.539141  301044 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.34 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:43:10.539343  301044 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:43:10.539513  301044 config.go:182] Loaded profile config "default-k8s-diff-port-972693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:43:10.539528  301044 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539556  301044 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539574  301044 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-972693"
	I0729 13:43:10.539587  301044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-972693"
	I0729 13:43:10.539600  301044 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-972693"
	I0729 13:43:10.539623  301044 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.539635  301044 addons.go:243] addon metrics-server should already be in state true
	I0729 13:43:10.539692  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	W0729 13:43:10.539594  301044 addons.go:243] addon storage-provisioner should already be in state true
	I0729 13:43:10.539817  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.540342  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540368  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540380  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540399  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.540664  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.540814  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.542249  301044 out.go:177] * Verifying Kubernetes components...
	I0729 13:43:10.543974  301044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:43:10.561555  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0729 13:43:10.561585  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0729 13:43:10.561820  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 13:43:10.562096  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562160  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562579  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.562694  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562711  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.562750  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.562766  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563224  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563236  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.563496  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.563516  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.563793  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563923  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.563959  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.563982  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.564526  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.564781  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.569041  301044 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-972693"
	W0729 13:43:10.569062  301044 addons.go:243] addon default-storageclass should already be in state true
	I0729 13:43:10.569091  301044 host.go:66] Checking if "default-k8s-diff-port-972693" exists ...
	I0729 13:43:10.569443  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.569462  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.580340  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0729 13:43:10.580852  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.581371  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.581384  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.581724  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.581911  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.583937  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0729 13:43:10.584108  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.584422  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.584864  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.584881  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.585262  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.585445  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.586285  301044 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 13:43:10.586973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.587855  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:43:10.587873  301044 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:43:10.587907  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.588885  301044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:43:10.689091  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:43:10.689558  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:10.689837  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:10.590240  301044 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.590258  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:43:10.590275  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.592026  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0729 13:43:10.592306  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.592778  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.592859  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.592877  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.593162  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.593295  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.593382  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.593455  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.593663  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594055  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.594082  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.594233  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.594388  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.594485  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.594621  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.594882  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.594892  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.595227  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.595663  301044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19341-233093/.minikube/bin/docker-machine-driver-kvm2
	I0729 13:43:10.595680  301044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:43:10.611094  301044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0729 13:43:10.611617  301044 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:43:10.612200  301044 main.go:141] libmachine: Using API Version  1
	I0729 13:43:10.612224  301044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:43:10.612600  301044 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:43:10.612973  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetState
	I0729 13:43:10.614541  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .DriverName
	I0729 13:43:10.614743  301044 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:10.614757  301044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:43:10.614774  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHHostname
	I0729 13:43:10.617611  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618040  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:67:cb", ip: ""} in network mk-default-k8s-diff-port-972693: {Iface:virbr2 ExpiryTime:2024-07-29 14:29:34 +0000 UTC Type:0 Mac:52:54:00:be:67:cb Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:default-k8s-diff-port-972693 Clientid:01:52:54:00:be:67:cb}
	I0729 13:43:10.618064  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | domain default-k8s-diff-port-972693 has defined IP address 192.168.50.34 and MAC address 52:54:00:be:67:cb in network mk-default-k8s-diff-port-972693
	I0729 13:43:10.618260  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHPort
	I0729 13:43:10.618416  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHKeyPath
	I0729 13:43:10.618595  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .GetSSHUsername
	I0729 13:43:10.618754  301044 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/default-k8s-diff-port-972693/id_rsa Username:docker}
	I0729 13:43:10.791924  301044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:43:10.850744  301044 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866102  301044 node_ready.go:49] node "default-k8s-diff-port-972693" has status "Ready":"True"
	I0729 13:43:10.866137  301044 node_ready.go:38] duration metric: took 15.35404ms for node "default-k8s-diff-port-972693" to be "Ready" ...
	I0729 13:43:10.866171  301044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:43:10.877661  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:10.958120  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:43:10.981335  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:43:10.981363  301044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 13:43:10.982804  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:43:11.145078  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:43:11.145108  301044 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:43:11.236628  301044 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:11.236658  301044 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:43:11.308646  301044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315025489s)
	I0729 13:43:12.273186  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290345752s)
	I0729 13:43:12.273254  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273270  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273283  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273296  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273572  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273589  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273598  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273606  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.273704  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.273721  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.273731  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.273739  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.275558  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275601  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275616  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.275624  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.275634  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.275644  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.309442  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.309473  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.309839  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.309888  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.309909  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.464546  301044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.155855113s)
	I0729 13:43:12.464601  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.464614  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465037  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465060  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465071  301044 main.go:141] libmachine: Making call to close driver server
	I0729 13:43:12.465081  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) Calling .Close
	I0729 13:43:12.465398  301044 main.go:141] libmachine: (default-k8s-diff-port-972693) DBG | Closing plugin on server side
	I0729 13:43:12.465418  301044 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:43:12.465476  301044 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:43:12.465494  301044 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-972693"
	I0729 13:43:12.467315  301044 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
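The addon lines above follow a fixed pattern: each manifest is copied under /etc/kubernetes/addons/ on the node (the scp lines), then kubectl apply is run over the whole set with KUBECONFIG pointing at the cluster's kubeconfig. A minimal Go sketch of just that apply step, mirroring the paths shown in the log; this is illustrative only, not minikube's addons.go implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The scp lines in the log have already placed these manifests on the
	// node; this sketch only reproduces the final apply step.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// Same kubectl binary and kubeconfig paths that appear in the log lines above.
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}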
	I0729 13:43:09.811571  300705 api_server.go:253] Checking apiserver healthz at https://192.168.72.207:8443/healthz ...
	I0729 13:43:09.817221  300705 api_server.go:279] https://192.168.72.207:8443/healthz returned 200:
	ok
	I0729 13:43:09.818319  300705 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:09.818342  300705 api_server.go:131] duration metric: took 3.939996032s to wait for apiserver health ...
	I0729 13:43:09.818350  300705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:09.818373  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:43:09.818425  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:43:09.861856  300705 cri.go:89] found id: "ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:09.861883  300705 cri.go:89] found id: ""
	I0729 13:43:09.861894  300705 logs.go:276] 1 containers: [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679]
	I0729 13:43:09.861962  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.867142  300705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:43:09.867216  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:43:09.909767  300705 cri.go:89] found id: "7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:09.909795  300705 cri.go:89] found id: ""
	I0729 13:43:09.909808  300705 logs.go:276] 1 containers: [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879]
	I0729 13:43:09.909877  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.914410  300705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:43:09.914482  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:43:09.953540  300705 cri.go:89] found id: "77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:09.953568  300705 cri.go:89] found id: ""
	I0729 13:43:09.953578  300705 logs.go:276] 1 containers: [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1]
	I0729 13:43:09.953637  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:09.958140  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:43:09.958214  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:43:09.999809  300705 cri.go:89] found id: "ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:09.999836  300705 cri.go:89] found id: ""
	I0729 13:43:09.999846  300705 logs.go:276] 1 containers: [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046]
	I0729 13:43:09.999911  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.004505  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:43:10.004587  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:43:10.049146  300705 cri.go:89] found id: "646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.049173  300705 cri.go:89] found id: ""
	I0729 13:43:10.049182  300705 logs.go:276] 1 containers: [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1]
	I0729 13:43:10.049252  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.053631  300705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:43:10.053698  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:43:10.090361  300705 cri.go:89] found id: "d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.090386  300705 cri.go:89] found id: ""
	I0729 13:43:10.090396  300705 logs.go:276] 1 containers: [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee]
	I0729 13:43:10.090442  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.095528  300705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:43:10.095588  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:43:10.131892  300705 cri.go:89] found id: ""
	I0729 13:43:10.131925  300705 logs.go:276] 0 containers: []
	W0729 13:43:10.131937  300705 logs.go:278] No container was found matching "kindnet"
	I0729 13:43:10.131944  300705 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 13:43:10.132008  300705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 13:43:10.169101  300705 cri.go:89] found id: "197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.169127  300705 cri.go:89] found id: "5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.169133  300705 cri.go:89] found id: ""
	I0729 13:43:10.169142  300705 logs.go:276] 2 containers: [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6]
	I0729 13:43:10.169203  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.174716  300705 ssh_runner.go:195] Run: which crictl
	I0729 13:43:10.179196  300705 logs.go:123] Gathering logs for coredns [77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1] ...
	I0729 13:43:10.179217  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77e0f82421c5b0c6b3c4a53e11c08e60906fb76e13881ace9e6527b9966c9bb1"
	I0729 13:43:10.222803  300705 logs.go:123] Gathering logs for kube-scheduler [ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046] ...
	I0729 13:43:10.222833  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed231f7f456e5f9e3621278c6b1c6abc61583676f9220157feed9da77c70f046"
	I0729 13:43:10.265944  300705 logs.go:123] Gathering logs for kube-proxy [646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1] ...
	I0729 13:43:10.265975  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646e0d1187d7e69e64e55af7aa01a0936b7eb0724113b1e258400dd9ce928ae1"
	I0729 13:43:10.310266  300705 logs.go:123] Gathering logs for kube-controller-manager [d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee] ...
	I0729 13:43:10.310294  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0bbe9cda62b653aea8a1887590ca94abbf0cfdbb9b0267f1818af7d2eabc5ee"
	I0729 13:43:10.370562  300705 logs.go:123] Gathering logs for storage-provisioner [5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6] ...
	I0729 13:43:10.370611  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b08d92f67be8b265f5acbeb26c02c1acc37d011ef5d29f0a403e728b140b7d6"
	I0729 13:43:10.415759  300705 logs.go:123] Gathering logs for container status ...
	I0729 13:43:10.415803  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:43:10.467672  300705 logs.go:123] Gathering logs for kubelet ...
	I0729 13:43:10.467702  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:43:10.531249  300705 logs.go:123] Gathering logs for dmesg ...
	I0729 13:43:10.531293  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:43:10.550454  300705 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:43:10.550485  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 13:43:10.709028  300705 logs.go:123] Gathering logs for kube-apiserver [ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679] ...
	I0729 13:43:10.709068  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9187ea50de25d8ad3a1e5976a8c85be7e1c0e615457ec1e0b7153cce5c7679"
	I0729 13:43:10.761048  300705 logs.go:123] Gathering logs for etcd [7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879] ...
	I0729 13:43:10.761093  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed77a408cabd9575d341ca1e507341b11d8c6763839c6195641a7e4dd862879"
	I0729 13:43:10.813125  300705 logs.go:123] Gathering logs for storage-provisioner [197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6] ...
	I0729 13:43:10.813169  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197f6e7a6144c9b5d42f2de8a6b02c19b9abcfcccb6f9801bda9136c2bf050e6"
	I0729 13:43:10.852581  300705 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:43:10.852608  300705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:43:13.725236  300705 system_pods.go:59] 8 kube-system pods found
	I0729 13:43:13.725272  300705 system_pods.go:61] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.725279  300705 system_pods.go:61] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.725284  300705 system_pods.go:61] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.725289  300705 system_pods.go:61] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.725293  300705 system_pods.go:61] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.725298  300705 system_pods.go:61] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.725306  300705 system_pods.go:61] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.725312  300705 system_pods.go:61] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.725322  300705 system_pods.go:74] duration metric: took 3.906966083s to wait for pod list to return data ...
	I0729 13:43:13.725335  300705 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:13.727954  300705 default_sa.go:45] found service account: "default"
	I0729 13:43:13.727984  300705 default_sa.go:55] duration metric: took 2.638639ms for default service account to be created ...
	I0729 13:43:13.728032  300705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:13.733141  300705 system_pods.go:86] 8 kube-system pods found
	I0729 13:43:13.733163  300705 system_pods.go:89] "coredns-7db6d8ff4d-rgh5d" [b7276884-67e0-41fc-af75-2f8ba96e4c52] Running
	I0729 13:43:13.733169  300705 system_pods.go:89] "etcd-embed-certs-135920" [1f91b00a-00f7-49e2-a32b-3378c2cf9896] Running
	I0729 13:43:13.733173  300705 system_pods.go:89] "kube-apiserver-embed-certs-135920" [7cfdffba-7496-4ccb-8ab9-ad7a76635f46] Running
	I0729 13:43:13.733177  300705 system_pods.go:89] "kube-controller-manager-embed-certs-135920" [a537baff-5701-4952-bd5f-9af72963ec52] Running
	I0729 13:43:13.733181  300705 system_pods.go:89] "kube-proxy-sn8bc" [1199ef7b-b5ff-4051-abf7-eda86a891508] Running
	I0729 13:43:13.733185  300705 system_pods.go:89] "kube-scheduler-embed-certs-135920" [735d6da4-2c07-4d89-aa7e-5555148f0d74] Running
	I0729 13:43:13.733191  300705 system_pods.go:89] "metrics-server-569cc877fc-nzn76" [4ce279ad-65aa-47ce-9cb2-9a964d26950c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:13.733196  300705 system_pods.go:89] "storage-provisioner" [420625d8-a8f2-4ca4-90b0-7090c079b40e] Running
	I0729 13:43:13.733205  300705 system_pods.go:126] duration metric: took 5.16021ms to wait for k8s-apps to be running ...
	I0729 13:43:13.733213  300705 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:13.733255  300705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:13.755011  300705 system_svc.go:56] duration metric: took 21.784065ms WaitForService to wait for kubelet
	I0729 13:43:13.755042  300705 kubeadm.go:582] duration metric: took 4m23.697000108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:13.755068  300705 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:13.758549  300705 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:13.758572  300705 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:13.758586  300705 node_conditions.go:105] duration metric: took 3.512205ms to run NodePressure ...
	I0729 13:43:13.758601  300705 start.go:241] waiting for startup goroutines ...
	I0729 13:43:13.758612  300705 start.go:246] waiting for cluster config update ...
	I0729 13:43:13.758625  300705 start.go:255] writing updated cluster config ...
	I0729 13:43:13.758945  300705 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:13.810333  300705 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:13.812397  300705 out.go:177] * Done! kubectl is now configured to use "embed-certs-135920" cluster and "default" namespace by default
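The api_server.go lines above record the check minikube performs before declaring the cluster done: poll the control plane's /healthz endpoint until it returns 200 with body "ok". A minimal Go sketch of that style of polling, assuming the caller supplies the URL and an *http.Client already configured with the cluster CA (TLS setup is omitted here); the helper name is illustrative, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok", or until
// the context deadline expires. Hypothetical helper for illustration.
func waitForHealthz(ctx context.Context, client *http.Client, url string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		resp, err := client.Do(req)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// e.g. "https://192.168.72.207:8443/healthz returned 200: ok"
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("healthz never became ready: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// http.DefaultClient stands in for a client configured with the cluster CA.
	if err := waitForHealthz(ctx, http.DefaultClient, "https://192.168.72.207:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}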
	I0729 13:43:12.468541  301044 addons.go:510] duration metric: took 1.929219306s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 13:43:12.887280  301044 pod_ready.go:102] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"False"
	I0729 13:43:13.386255  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.386279  301044 pod_ready.go:81] duration metric: took 2.508586907s for pod "coredns-7db6d8ff4d-t29vc" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.386291  301044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391278  301044 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.391302  301044 pod_ready.go:81] duration metric: took 5.00403ms for pod "coredns-7db6d8ff4d-zlz8m" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.391313  301044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396324  301044 pod_ready.go:92] pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.396343  301044 pod_ready.go:81] duration metric: took 5.022707ms for pod "etcd-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.396350  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403008  301044 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.403026  301044 pod_ready.go:81] duration metric: took 6.670677ms for pod "kube-apiserver-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.403035  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407836  301044 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.407856  301044 pod_ready.go:81] duration metric: took 4.814401ms for pod "kube-controller-manager-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.407868  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783140  301044 pod_ready.go:92] pod "kube-proxy-tfsk9" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:13.783168  301044 pod_ready.go:81] duration metric: took 375.291599ms for pod "kube-proxy-tfsk9" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:13.783181  301044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182560  301044 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace has status "Ready":"True"
	I0729 13:43:14.182588  301044 pod_ready.go:81] duration metric: took 399.399691ms for pod "kube-scheduler-default-k8s-diff-port-972693" in "kube-system" namespace to be "Ready" ...
	I0729 13:43:14.182597  301044 pod_ready.go:38] duration metric: took 3.316409576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
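The pod_ready.go lines throughout this run repeat the same basic check per pod: fetch the pod and look for a Ready condition with status True. A rough client-go sketch of that check, with kubeconfig loading and retry logic simplified; the function name and kubeconfig path are illustrative assumptions, not minikube's own code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True, which is
// what the pod_ready.go "Ready":"True"/"False" log lines above are checking.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Path is illustrative; on the node minikube uses /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-t29vc")
	fmt.Println("ready:", ready, "err:", err)
}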
	I0729 13:43:14.182610  301044 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:43:14.182661  301044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:43:14.210715  301044 api_server.go:72] duration metric: took 3.671529553s to wait for apiserver process to appear ...
	I0729 13:43:14.210749  301044 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:43:14.210790  301044 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8444/healthz ...
	I0729 13:43:14.214886  301044 api_server.go:279] https://192.168.50.34:8444/healthz returned 200:
	ok
	I0729 13:43:14.215773  301044 api_server.go:141] control plane version: v1.30.3
	I0729 13:43:14.215795  301044 api_server.go:131] duration metric: took 5.0389ms to wait for apiserver health ...
	I0729 13:43:14.215802  301044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:43:14.386356  301044 system_pods.go:59] 9 kube-system pods found
	I0729 13:43:14.386389  301044 system_pods.go:61] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.386394  301044 system_pods.go:61] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.386398  301044 system_pods.go:61] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.386401  301044 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.386405  301044 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.386409  301044 system_pods.go:61] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.386412  301044 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.386417  301044 system_pods.go:61] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.386420  301044 system_pods.go:61] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.386430  301044 system_pods.go:74] duration metric: took 170.622271ms to wait for pod list to return data ...
	I0729 13:43:14.386437  301044 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:43:14.582618  301044 default_sa.go:45] found service account: "default"
	I0729 13:43:14.582643  301044 default_sa.go:55] duration metric: took 196.19918ms for default service account to be created ...
	I0729 13:43:14.582652  301044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:43:14.785669  301044 system_pods.go:86] 9 kube-system pods found
	I0729 13:43:14.785701  301044 system_pods.go:89] "coredns-7db6d8ff4d-t29vc" [5d4d8867-523f-4115-b3dd-76a9e2765af1] Running
	I0729 13:43:14.785707  301044 system_pods.go:89] "coredns-7db6d8ff4d-zlz8m" [aecbb6c3-53d7-4497-a26f-c41a7795681a] Running
	I0729 13:43:14.785711  301044 system_pods.go:89] "etcd-default-k8s-diff-port-972693" [5bf8d679-c451-4042-a6e0-281b753e2612] Running
	I0729 13:43:14.785719  301044 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-972693" [ed1021e4-af3b-43ca-99a7-186d5f7c0f3e] Running
	I0729 13:43:14.785723  301044 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-972693" [4c793eae-5476-4f21-9a02-0221804293be] Running
	I0729 13:43:14.785727  301044 system_pods.go:89] "kube-proxy-tfsk9" [952c235c-310b-4f82-ba2d-fe06f3556a2c] Running
	I0729 13:43:14.785731  301044 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-972693" [c80e610d-fc16-4990-a481-17c4a133f8ab] Running
	I0729 13:43:14.785737  301044 system_pods.go:89] "metrics-server-569cc877fc-wwxmx" [268a70c4-a35d-45c5-9da9-4e1f7dcf52fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:43:14.785741  301044 system_pods.go:89] "storage-provisioner" [8b577293-6827-4c76-a404-6b53739ae6e9] Running
	I0729 13:43:14.785750  301044 system_pods.go:126] duration metric: took 203.092668ms to wait for k8s-apps to be running ...
	I0729 13:43:14.785756  301044 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:43:14.785801  301044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:43:14.802927  301044 system_svc.go:56] duration metric: took 17.160927ms WaitForService to wait for kubelet
	I0729 13:43:14.802957  301044 kubeadm.go:582] duration metric: took 4.263780375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:43:14.802977  301044 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:43:14.983106  301044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:43:14.983135  301044 node_conditions.go:123] node cpu capacity is 2
	I0729 13:43:14.983146  301044 node_conditions.go:105] duration metric: took 180.164781ms to run NodePressure ...
	I0729 13:43:14.983159  301044 start.go:241] waiting for startup goroutines ...
	I0729 13:43:14.983165  301044 start.go:246] waiting for cluster config update ...
	I0729 13:43:14.983175  301044 start.go:255] writing updated cluster config ...
	I0729 13:43:14.983443  301044 ssh_runner.go:195] Run: rm -f paused
	I0729 13:43:15.038438  301044 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:43:15.040318  301044 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-972693" cluster and "default" namespace by default
	I0729 13:43:15.690809  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:15.691011  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:25.691962  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:25.692244  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:43:45.693269  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:43:45.693473  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696107  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:44:25.696300  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:44:25.696307  301425 kubeadm.go:310] 
	I0729 13:44:25.696341  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:44:25.696400  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:44:25.696419  301425 kubeadm.go:310] 
	I0729 13:44:25.696463  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:44:25.696510  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:44:25.696653  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:44:25.696674  301425 kubeadm.go:310] 
	I0729 13:44:25.696818  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:44:25.696868  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:44:25.696921  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:44:25.696930  301425 kubeadm.go:310] 
	I0729 13:44:25.697076  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:44:25.697192  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:44:25.697206  301425 kubeadm.go:310] 
	I0729 13:44:25.697349  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:44:25.697459  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:44:25.697568  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:44:25.697669  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:44:25.697680  301425 kubeadm.go:310] 
	I0729 13:44:25.698359  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:44:25.698490  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:44:25.698596  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 13:44:25.698771  301425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 13:44:25.698848  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 13:44:26.160539  301425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:26.175482  301425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:44:26.185562  301425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:44:26.185593  301425 kubeadm.go:157] found existing configuration files:
	
	I0729 13:44:26.185657  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:44:26.195781  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:44:26.195865  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:44:26.207404  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:44:26.217068  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:44:26.217188  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:44:26.226075  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.234622  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:44:26.234684  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:44:26.243756  301425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:44:26.252630  301425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:44:26.252695  301425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:44:26.262846  301425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:44:26.340215  301425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 13:44:26.340318  301425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:44:26.496049  301425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:44:26.496199  301425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:44:26.496327  301425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:44:26.678135  301425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:44:26.680089  301425 out.go:204]   - Generating certificates and keys ...
	I0729 13:44:26.680173  301425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:44:26.680257  301425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:44:26.680378  301425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 13:44:26.680470  301425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 13:44:26.680570  301425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 13:44:26.680653  301425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 13:44:26.680751  301425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 13:44:26.681022  301425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 13:44:26.681519  301425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 13:44:26.681876  301425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 13:44:26.681994  301425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 13:44:26.682083  301425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:44:26.762680  301425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:44:26.922517  301425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:44:26.973731  301425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:44:27.193064  301425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:44:27.216477  301425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:44:27.219036  301425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:44:27.219293  301425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:44:27.386424  301425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:44:27.388194  301425 out.go:204]   - Booting up control plane ...
	I0729 13:44:27.388340  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:44:27.390345  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:44:27.391455  301425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:44:27.392303  301425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:44:27.394301  301425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 13:45:07.396989  301425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 13:45:07.397449  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:07.397719  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:12.397982  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:12.398297  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:22.398751  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:22.399010  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:45:42.399462  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:45:42.399675  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398413  301425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 13:46:22.398684  301425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 13:46:22.398700  301425 kubeadm.go:310] 
	I0729 13:46:22.398763  301425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 13:46:22.398844  301425 kubeadm.go:310] 		timed out waiting for the condition
	I0729 13:46:22.398886  301425 kubeadm.go:310] 
	I0729 13:46:22.398948  301425 kubeadm.go:310] 	This error is likely caused by:
	I0729 13:46:22.399002  301425 kubeadm.go:310] 		- The kubelet is not running
	I0729 13:46:22.399132  301425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 13:46:22.399145  301425 kubeadm.go:310] 
	I0729 13:46:22.399287  301425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 13:46:22.399346  301425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 13:46:22.399392  301425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 13:46:22.399404  301425 kubeadm.go:310] 
	I0729 13:46:22.399530  301425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 13:46:22.399610  301425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 13:46:22.399617  301425 kubeadm.go:310] 
	I0729 13:46:22.399735  301425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 13:46:22.399844  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 13:46:22.399943  301425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 13:46:22.400021  301425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 13:46:22.400035  301425 kubeadm.go:310] 
	I0729 13:46:22.400291  301425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:46:22.400370  301425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 13:46:22.400440  301425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 13:46:22.400520  301425 kubeadm.go:394] duration metric: took 7m57.286753846s to StartCluster
	I0729 13:46:22.400612  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 13:46:22.400692  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 13:46:22.446188  301425 cri.go:89] found id: ""
	I0729 13:46:22.446216  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.446225  301425 logs.go:278] No container was found matching "kube-apiserver"
	I0729 13:46:22.446232  301425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 13:46:22.446289  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 13:46:22.484089  301425 cri.go:89] found id: ""
	I0729 13:46:22.484118  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.484128  301425 logs.go:278] No container was found matching "etcd"
	I0729 13:46:22.484135  301425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 13:46:22.484197  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 13:46:22.526817  301425 cri.go:89] found id: ""
	I0729 13:46:22.526846  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.526854  301425 logs.go:278] No container was found matching "coredns"
	I0729 13:46:22.526860  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 13:46:22.526912  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 13:46:22.564787  301425 cri.go:89] found id: ""
	I0729 13:46:22.564834  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.564846  301425 logs.go:278] No container was found matching "kube-scheduler"
	I0729 13:46:22.564854  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 13:46:22.564920  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 13:46:22.601843  301425 cri.go:89] found id: ""
	I0729 13:46:22.601881  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.601892  301425 logs.go:278] No container was found matching "kube-proxy"
	I0729 13:46:22.601900  301425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 13:46:22.601980  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 13:46:22.637420  301425 cri.go:89] found id: ""
	I0729 13:46:22.637448  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.637455  301425 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 13:46:22.637462  301425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 13:46:22.637519  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 13:46:22.672427  301425 cri.go:89] found id: ""
	I0729 13:46:22.672465  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.672476  301425 logs.go:278] No container was found matching "kindnet"
	I0729 13:46:22.672485  301425 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 13:46:22.672549  301425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 13:46:22.708256  301425 cri.go:89] found id: ""
	I0729 13:46:22.708285  301425 logs.go:276] 0 containers: []
	W0729 13:46:22.708294  301425 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 13:46:22.708306  301425 logs.go:123] Gathering logs for CRI-O ...
	I0729 13:46:22.708323  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 13:46:22.819287  301425 logs.go:123] Gathering logs for container status ...
	I0729 13:46:22.819327  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 13:46:22.859298  301425 logs.go:123] Gathering logs for kubelet ...
	I0729 13:46:22.859339  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 13:46:22.914290  301425 logs.go:123] Gathering logs for dmesg ...
	I0729 13:46:22.914342  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 13:46:22.936919  301425 logs.go:123] Gathering logs for describe nodes ...
	I0729 13:46:22.936951  301425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 13:46:23.035889  301425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 13:46:23.035939  301425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 13:46:23.035991  301425 out.go:239] * 
	W0729 13:46:23.036103  301425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.036137  301425 out.go:239] * 
	W0729 13:46:23.037370  301425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:46:23.040573  301425 out.go:177] 
	W0729 13:46:23.042130  301425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 13:46:23.042173  301425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 13:46:23.042193  301425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 13:46:23.043539  301425 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.932192574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261469932160872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d5eeae3-b516-4656-b7c1-2d7b8674ac4c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.932794117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d559fdcd-1132-47d2-a1ce-320bd62b1943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.932869811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d559fdcd-1132-47d2-a1ce-320bd62b1943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.932903955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d559fdcd-1132-47d2-a1ce-320bd62b1943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.964226297Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77b7ca5a-ee70-4f76-b3bd-cca433200e95 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.964316936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77b7ca5a-ee70-4f76-b3bd-cca433200e95 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.965579026Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a003707-efca-481f-bf82-9bb203f347c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.966093728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261469966065132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a003707-efca-481f-bf82-9bb203f347c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.966560346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15c1ce49-2574-4e99-8453-cf8159c431b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.966618225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15c1ce49-2574-4e99-8453-cf8159c431b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:49 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:49.966657856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=15c1ce49-2574-4e99-8453-cf8159c431b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.002986288Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=164a8ed6-b4c0-4e84-9a40-ff23f1513eb1 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.003117260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=164a8ed6-b4c0-4e84-9a40-ff23f1513eb1 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.004359200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=521164f0-2948-4795-9da3-07261da2dead name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.004737359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261470004710265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=521164f0-2948-4795-9da3-07261da2dead name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.005484781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4741edb1-64bf-44e8-a4d4-68241e4110ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.005536694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4741edb1-64bf-44e8-a4d4-68241e4110ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.005575021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4741edb1-64bf-44e8-a4d4-68241e4110ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.040521511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc992bbf-ad0a-4ca8-bcaa-2d1edf548a7f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.040610549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc992bbf-ad0a-4ca8-bcaa-2d1edf548a7f name=/runtime.v1.RuntimeService/Version
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.041907077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b449ea86-039e-43fc-9afa-61d26399fb11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.042325785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261470042298845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b449ea86-039e-43fc-9afa-61d26399fb11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.043007175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab156dcd-aa36-45a5-80f7-b18fc1eaf66d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.043078207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab156dcd-aa36-45a5-80f7-b18fc1eaf66d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:57:50 old-k8s-version-924039 crio[651]: time="2024-07-29 13:57:50.043120234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ab156dcd-aa36-45a5-80f7-b18fc1eaf66d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 13:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050569] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048582] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul29 13:38] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.901895] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.671429] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000011] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.092860] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.061256] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065965] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.189582] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.150988] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.251542] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.656340] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.075950] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.028528] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +9.845046] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 13:42] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Jul29 13:44] systemd-fstab-generator[5301]: Ignoring "noauto" option for root device
	[  +0.070564] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:57:50 up 19 min,  0 users,  load average: 0.02, 0.07, 0.03
	Linux old-k8s-version-924039 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: goroutine 156 [runnable]:
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000488540)
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: goroutine 157 [select]:
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b7e280, 0xc0001dbb01, 0xc0007f0d80, 0xc0007d19a0, 0xc00049bc40, 0xc00049bbc0)
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001dbb00, 0x0, 0x0)
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000488540)
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 13:57:46 old-k8s-version-924039 kubelet[6781]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 29 13:57:46 old-k8s-version-924039 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 13:57:46 old-k8s-version-924039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 13:57:47 old-k8s-version-924039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 139.
	Jul 29 13:57:47 old-k8s-version-924039 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 13:57:47 old-k8s-version-924039 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 13:57:47 old-k8s-version-924039 kubelet[6789]: I0729 13:57:47.451295    6789 server.go:416] Version: v1.20.0
	Jul 29 13:57:47 old-k8s-version-924039 kubelet[6789]: I0729 13:57:47.451858    6789 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 13:57:47 old-k8s-version-924039 kubelet[6789]: I0729 13:57:47.454217    6789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 13:57:47 old-k8s-version-924039 kubelet[6789]: I0729 13:57:47.455516    6789 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 29 13:57:47 old-k8s-version-924039 kubelet[6789]: W0729 13:57:47.455707    6789 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 2 (224.846674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-924039" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (141.51s)

                                                
                                    

Test pass (246/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 45.69
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 20.44
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 28.92
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 61.41
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 216.64
40 TestAddons/serial/GCPAuth/Namespaces 2.74
42 TestAddons/parallel/Registry 19.31
44 TestAddons/parallel/InspektorGadget 11.47
46 TestAddons/parallel/HelmTiller 13.2
48 TestAddons/parallel/CSI 68.31
49 TestAddons/parallel/Headlamp 17.13
50 TestAddons/parallel/CloudSpanner 5.54
51 TestAddons/parallel/LocalPath 61.11
52 TestAddons/parallel/NvidiaDevicePlugin 5.51
53 TestAddons/parallel/Yakd 10.75
55 TestCertOptions 45.69
56 TestCertExpiration 298.21
58 TestForceSystemdFlag 96.24
59 TestForceSystemdEnv 71.61
61 TestKVMDriverInstallOrUpdate 22.8
65 TestErrorSpam/setup 43.57
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.57
69 TestErrorSpam/unpause 1.57
70 TestErrorSpam/stop 5.64
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 65.36
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 42.47
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
82 TestFunctional/serial/CacheCmd/cache/add_local 2.7
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.1
88 TestFunctional/serial/MinikubeKubectlCmd 0.12
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 61.85
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.38
93 TestFunctional/serial/LogsFileCmd 1.39
94 TestFunctional/serial/InvalidService 4.96
96 TestFunctional/parallel/ConfigCmd 0.33
97 TestFunctional/parallel/DashboardCmd 19.61
98 TestFunctional/parallel/DryRun 0.28
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.75
104 TestFunctional/parallel/ServiceCmdConnect 20.76
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 48.69
108 TestFunctional/parallel/SSHCmd 0.43
109 TestFunctional/parallel/CpCmd 1.25
110 TestFunctional/parallel/MySQL 24.06
111 TestFunctional/parallel/FileSync 0.21
112 TestFunctional/parallel/CertSync 1.27
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
120 TestFunctional/parallel/License 0.59
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
122 TestFunctional/parallel/ProfileCmd/profile_list 0.28
123 TestFunctional/parallel/Version/short 0.04
124 TestFunctional/parallel/Version/components 0.69
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.56
130 TestFunctional/parallel/ImageCommands/Setup 2.62
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
135 TestFunctional/parallel/MountCmd/any-port 21.82
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.88
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.05
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
152 TestFunctional/parallel/MountCmd/specific-port 2.07
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
154 TestFunctional/parallel/ServiceCmd/DeployApp 23.33
155 TestFunctional/parallel/ServiceCmd/List 1.23
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.23
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
158 TestFunctional/parallel/ServiceCmd/Format 0.27
159 TestFunctional/parallel/ServiceCmd/URL 0.28
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 229.83
167 TestMultiControlPlane/serial/DeployApp 8.31
168 TestMultiControlPlane/serial/PingHostFromPods 1.18
169 TestMultiControlPlane/serial/AddWorkerNode 58.28
170 TestMultiControlPlane/serial/NodeLabels 0.06
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.72
173 TestMultiControlPlane/serial/StopSecondaryNode 3.91
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.38
175 TestMultiControlPlane/serial/RestartSecondaryNode 49.05
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
188 TestJSONOutput/start/Command 67.92
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.69
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.62
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.38
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 89.62
220 TestMountStart/serial/StartWithMountFirst 25.41
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 30.67
223 TestMountStart/serial/VerifyMountSecond 0.38
224 TestMountStart/serial/DeleteFirst 0.9
225 TestMountStart/serial/VerifyMountPostDelete 0.38
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 25.09
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 124.43
232 TestMultiNode/serial/DeployApp2Nodes 6.91
233 TestMultiNode/serial/PingHostFrom2Pods 0.79
234 TestMultiNode/serial/AddNode 52.08
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.22
237 TestMultiNode/serial/CopyFile 7.11
238 TestMultiNode/serial/StopNode 2.34
239 TestMultiNode/serial/StartAfterStop 40.17
241 TestMultiNode/serial/DeleteNode 2.45
243 TestMultiNode/serial/RestartMultiNode 177.39
244 TestMultiNode/serial/ValidateNameConflict 42.65
251 TestScheduledStopUnix 114.12
255 TestRunningBinaryUpgrade 218.65
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 94.99
269 TestNetworkPlugins/group/false 2.9
273 TestStoppedBinaryUpgrade/Setup 2.72
274 TestStoppedBinaryUpgrade/Upgrade 160.38
275 TestNoKubernetes/serial/StartWithStopK8s 66.24
276 TestNoKubernetes/serial/Start 30.82
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
278 TestNoKubernetes/serial/ProfileList 28.34
279 TestNoKubernetes/serial/Stop 1.32
280 TestNoKubernetes/serial/StartNoArgs 21.82
289 TestPause/serial/Start 74.45
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
293 TestNetworkPlugins/group/auto/Start 100.14
294 TestNetworkPlugins/group/kindnet/Start 118.25
295 TestNetworkPlugins/group/auto/KubeletFlags 0.2
296 TestNetworkPlugins/group/auto/NetCatPod 12.24
297 TestNetworkPlugins/group/auto/DNS 0.2
298 TestNetworkPlugins/group/auto/Localhost 0.14
299 TestNetworkPlugins/group/auto/HairPin 0.14
300 TestNetworkPlugins/group/calico/Start 90.5
301 TestNetworkPlugins/group/custom-flannel/Start 102.59
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
304 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
305 TestNetworkPlugins/group/kindnet/DNS 0.23
306 TestNetworkPlugins/group/kindnet/Localhost 0.21
307 TestNetworkPlugins/group/enable-default-cni/Start 95.12
308 TestNetworkPlugins/group/kindnet/HairPin 0.23
309 TestNetworkPlugins/group/flannel/Start 115.18
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.2
312 TestNetworkPlugins/group/calico/NetCatPod 11.22
313 TestNetworkPlugins/group/calico/DNS 0.27
314 TestNetworkPlugins/group/calico/Localhost 0.2
315 TestNetworkPlugins/group/calico/HairPin 0.16
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.29
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.24
320 TestNetworkPlugins/group/custom-flannel/DNS 0.24
321 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
322 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
323 TestNetworkPlugins/group/bridge/Start 74.57
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
330 TestStartStop/group/no-preload/serial/FirstStart 119.08
331 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
333 TestNetworkPlugins/group/flannel/NetCatPod 10.24
334 TestNetworkPlugins/group/flannel/DNS 0.16
335 TestNetworkPlugins/group/flannel/Localhost 0.14
336 TestNetworkPlugins/group/flannel/HairPin 0.13
338 TestStartStop/group/embed-certs/serial/FirstStart 72.39
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
340 TestNetworkPlugins/group/bridge/NetCatPod 10.23
341 TestNetworkPlugins/group/bridge/DNS 0.21
342 TestNetworkPlugins/group/bridge/Localhost 0.16
343 TestNetworkPlugins/group/bridge/HairPin 0.16
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.05
346 TestStartStop/group/embed-certs/serial/DeployApp 11.28
347 TestStartStop/group/no-preload/serial/DeployApp 10.31
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.27
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
359 TestStartStop/group/embed-certs/serial/SecondStart 636.49
360 TestStartStop/group/no-preload/serial/SecondStart 591.32
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 603.67
363 TestStartStop/group/old-k8s-version/serial/Stop 2.29
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
375 TestStartStop/group/newest-cni/serial/FirstStart 49.09
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
378 TestStartStop/group/newest-cni/serial/Stop 6.89
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
380 TestStartStop/group/newest-cni/serial/SecondStart 36.14
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/newest-cni/serial/Pause 2.33
TestDownloadOnly/v1.20.0/json-events (45.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-754449 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-754449 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (45.694517716s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (45.69s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-754449
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-754449: exit status 85 (58.781322ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-754449 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |          |
	|         | -p download-only-754449        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:02:01
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:02:01.817356  240352 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:02:01.817615  240352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:02:01.817625  240352 out.go:304] Setting ErrFile to fd 2...
	I0729 12:02:01.817630  240352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:02:01.817803  240352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	W0729 12:02:01.817944  240352 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19341-233093/.minikube/config/config.json: open /home/jenkins/minikube-integration/19341-233093/.minikube/config/config.json: no such file or directory
	I0729 12:02:01.818488  240352 out.go:298] Setting JSON to true
	I0729 12:02:01.819355  240352 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6265,"bootTime":1722248257,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:02:01.819417  240352 start.go:139] virtualization: kvm guest
	I0729 12:02:01.821610  240352 out.go:97] [download-only-754449] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 12:02:01.821721  240352 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 12:02:01.821759  240352 notify.go:220] Checking for updates...
	I0729 12:02:01.823123  240352 out.go:169] MINIKUBE_LOCATION=19341
	I0729 12:02:01.825001  240352 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:02:01.826267  240352 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:02:01.827483  240352 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:02:01.828754  240352 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 12:02:01.831291  240352 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 12:02:01.831554  240352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:02:01.935906  240352 out.go:97] Using the kvm2 driver based on user configuration
	I0729 12:02:01.935935  240352 start.go:297] selected driver: kvm2
	I0729 12:02:01.935945  240352 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:02:01.936280  240352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:02:01.936399  240352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:02:01.952118  240352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:02:01.952171  240352 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:02:01.952690  240352 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 12:02:01.952858  240352 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:02:01.952927  240352 cni.go:84] Creating CNI manager for ""
	I0729 12:02:01.952942  240352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:02:01.952953  240352 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:02:01.953033  240352 start.go:340] cluster config:
	{Name:download-only-754449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-754449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:02:01.953213  240352 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:02:01.955106  240352 out.go:97] Downloading VM boot image ...
	I0729 12:02:01.955132  240352 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:02:15.724306  240352 out.go:97] Starting "download-only-754449" primary control-plane node in "download-only-754449" cluster
	I0729 12:02:15.724342  240352 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 12:02:15.886955  240352 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:02:15.886991  240352 cache.go:56] Caching tarball of preloaded images
	I0729 12:02:15.887154  240352 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 12:02:15.889181  240352 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 12:02:15.889202  240352 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:02:16.048996  240352 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:02:38.008002  240352 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:02:38.008109  240352 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:02:38.907854  240352 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 12:02:38.908214  240352 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/download-only-754449/config.json ...
	I0729 12:02:38.908246  240352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/download-only-754449/config.json: {Name:mke9bcfa4f8799942e82c3f9555778d610ef1de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:02:38.908398  240352 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 12:02:38.908556  240352 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-754449 host does not exist
	  To start a cluster, run: "minikube start -p download-only-754449"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-754449
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (20.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-320141 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-320141 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (20.441957065s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (20.44s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-320141
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-320141: exit status 85 (60.141644ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-754449 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	|         | -p download-only-754449        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC | 29 Jul 24 12:02 UTC |
	| delete  | -p download-only-754449        | download-only-754449 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC | 29 Jul 24 12:02 UTC |
	| start   | -o=json --download-only        | download-only-320141 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	|         | -p download-only-320141        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:02:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:02:47.827300  240679 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:02:47.827444  240679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:02:47.827454  240679 out.go:304] Setting ErrFile to fd 2...
	I0729 12:02:47.827460  240679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:02:47.827656  240679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:02:47.828238  240679 out.go:298] Setting JSON to true
	I0729 12:02:47.829140  240679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6311,"bootTime":1722248257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:02:47.829205  240679 start.go:139] virtualization: kvm guest
	I0729 12:02:47.831354  240679 out.go:97] [download-only-320141] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:02:47.831507  240679 notify.go:220] Checking for updates...
	I0729 12:02:47.832898  240679 out.go:169] MINIKUBE_LOCATION=19341
	I0729 12:02:47.834464  240679 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:02:47.835853  240679 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:02:47.837082  240679 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:02:47.838397  240679 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 12:02:47.841281  240679 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 12:02:47.841514  240679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:02:47.872262  240679 out.go:97] Using the kvm2 driver based on user configuration
	I0729 12:02:47.872297  240679 start.go:297] selected driver: kvm2
	I0729 12:02:47.872305  240679 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:02:47.872717  240679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:02:47.872852  240679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:02:47.889088  240679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:02:47.889149  240679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:02:47.889624  240679 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 12:02:47.889791  240679 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:02:47.889821  240679 cni.go:84] Creating CNI manager for ""
	I0729 12:02:47.889832  240679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:02:47.889845  240679 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:02:47.889915  240679 start.go:340] cluster config:
	{Name:download-only-320141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-320141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:02:47.890027  240679 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:02:47.891501  240679 out.go:97] Starting "download-only-320141" primary control-plane node in "download-only-320141" cluster
	I0729 12:02:47.891523  240679 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:02:48.048124  240679 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:02:48.048195  240679 cache.go:56] Caching tarball of preloaded images
	I0729 12:02:48.048392  240679 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:02:48.092641  240679 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 12:02:48.092711  240679 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:02:48.250403  240679 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:03:06.257919  240679 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:03:06.258024  240679 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:03:07.030785  240679 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:03:07.031204  240679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/download-only-320141/config.json ...
	I0729 12:03:07.031247  240679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/download-only-320141/config.json: {Name:mkf2fd7778dbdfa0ec4b869fa65d3d887f0b4223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:03:07.031433  240679 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:03:07.031593  240679 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/linux/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-320141 host does not exist
	  To start a cluster, run: "minikube start -p download-only-320141"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-320141
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (28.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-679044 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-679044 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.917186775s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (28.92s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-679044
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-679044: exit status 85 (59.549703ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-754449 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	|         | -p download-only-754449             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC | 29 Jul 24 12:02 UTC |
	| delete  | -p download-only-754449             | download-only-754449 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC | 29 Jul 24 12:02 UTC |
	| start   | -o=json --download-only             | download-only-320141 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	|         | -p download-only-320141             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:03 UTC |
	| delete  | -p download-only-320141             | download-only-320141 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC | 29 Jul 24 12:03 UTC |
	| start   | -o=json --download-only             | download-only-679044 | jenkins | v1.33.1 | 29 Jul 24 12:03 UTC |                     |
	|         | -p download-only-679044             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:03:08
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:03:08.588963  240934 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:03:08.589077  240934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:03:08.589088  240934 out.go:304] Setting ErrFile to fd 2...
	I0729 12:03:08.589093  240934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:03:08.589290  240934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:03:08.589887  240934 out.go:298] Setting JSON to true
	I0729 12:03:08.590790  240934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6332,"bootTime":1722248257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:03:08.590858  240934 start.go:139] virtualization: kvm guest
	I0729 12:03:08.592818  240934 out.go:97] [download-only-679044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:03:08.592957  240934 notify.go:220] Checking for updates...
	I0729 12:03:08.594339  240934 out.go:169] MINIKUBE_LOCATION=19341
	I0729 12:03:08.595690  240934 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:03:08.597210  240934 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:03:08.598422  240934 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:03:08.599622  240934 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 12:03:08.601993  240934 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 12:03:08.602230  240934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:03:08.633320  240934 out.go:97] Using the kvm2 driver based on user configuration
	I0729 12:03:08.633350  240934 start.go:297] selected driver: kvm2
	I0729 12:03:08.633357  240934 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:03:08.633678  240934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:03:08.633759  240934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19341-233093/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:03:08.648045  240934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:03:08.648104  240934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:03:08.648570  240934 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 12:03:08.648720  240934 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:03:08.648785  240934 cni.go:84] Creating CNI manager for ""
	I0729 12:03:08.648812  240934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:03:08.648822  240934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:03:08.648902  240934 start.go:340] cluster config:
	{Name:download-only-679044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-679044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:03:08.648999  240934 iso.go:125] acquiring lock: {Name:mk286127eebdb995eadd3e9d10023ce1d15dc938 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:03:08.650658  240934 out.go:97] Starting "download-only-679044" primary control-plane node in "download-only-679044" cluster
	I0729 12:03:08.650673  240934 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:03:08.803853  240934 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:03:08.803888  240934 cache.go:56] Caching tarball of preloaded images
	I0729 12:03:08.804045  240934 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:03:08.805784  240934 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 12:03:08.805802  240934 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:03:08.959475  240934 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:03:24.822178  240934 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:03:24.822282  240934 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19341-233093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:03:25.562000  240934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 12:03:25.562342  240934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/download-only-679044/config.json ...
	I0729 12:03:25.562369  240934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/download-only-679044/config.json: {Name:mkf9962bd31f00ef224afae9eb7105ad85a6de53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:03:25.562524  240934 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:03:25.562654  240934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19341-233093/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-679044 host does not exist
	  To start a cluster, run: "minikube start -p download-only-679044"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
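
The "Last Start" log above is the download-only flow: minikube resolves the preload tarball for the requested Kubernetes version, downloads it with an md5 checksum, and caches kubectl, all without creating a VM. A minimal sketch of reproducing that by hand (profile name and version taken from the log; the cache path assumes the default MINIKUBE_HOME):

  # Fetch the preload and binaries only; no cluster is started
  minikube start -p download-only-679044 --download-only --force \
    --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2
  # The preloaded image tarball ends up in the host-side cache
  ls ~/.minikube/cache/preloaded-tarball/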

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-679044
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-468907 --alsologtostderr --binary-mirror http://127.0.0.1:44345 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-468907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-468907
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
TestOffline (61.41s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-201075 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-201075 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.360567765s)
helpers_test.go:175: Cleaning up "offline-crio-201075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-201075
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-201075: (1.050221515s)
--- PASS: TestOffline (61.41s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-631322
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-631322: exit status 85 (49.524935ms)

                                                
                                                
-- stdout --
	* Profile "addons-631322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-631322"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-631322
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-631322: exit status 85 (48.636848ms)

                                                
                                                
-- stdout --
	* Profile "addons-631322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-631322"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (216.64s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-631322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-631322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.63586999s)
--- PASS: TestAddons/Setup (216.64s)
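
The setup above enables every addon as flags on a single start; on an already-running profile the same addons can be toggled individually. A minimal sketch (profile name from the test; the exact "addons list" output format varies by minikube version):

  # Toggle one addon on the existing cluster and confirm its state
  minikube addons enable metrics-server -p addons-631322
  minikube addons list -p addons-631322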

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.74s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-631322 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-631322 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-631322 get secret gcp-auth -n new-namespace: exit status 1 (76.702674ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-631322 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-631322 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.74s)
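
What this test exercises: once the gcp-auth addon is enabled, its webhook copies the gcp-auth secret into newly created namespaces, but the copy is asynchronous, which is why the first "get secret" above returned NotFound before a retry succeeded. A hedged sketch of the same check (the namespace name here is just an example):

  # Create a namespace and wait for the replicated secret to appear
  kubectl --context addons-631322 create ns demo-ns
  until kubectl --context addons-631322 -n demo-ns get secret gcp-auth >/dev/null 2>&1; do sleep 1; done
  kubectl --context addons-631322 -n demo-ns get secret gcp-auth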

                                                
                                    
TestAddons/parallel/Registry (19.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.918788ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-n8scc" [01e3eb64-3cfb-4c8e-885d-d83fc4087b8b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006676357s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-74lcm" [24d73911-de6a-48f4-94d5-427b8aabe740] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006780274s
addons_test.go:342: (dbg) Run:  kubectl --context addons-631322 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-631322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-631322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.427970684s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 ip
2024/07/29 12:07:56 [DEBUG] GET http://192.168.39.55:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.31s)
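
The registry addon is checked from two sides above: in-cluster DNS and the registry-proxy port on the node. A sketch of the same two probes, using the commands from the log (the /v2/_catalog path assumes the standard Docker registry v2 API):

  # In-cluster: the Service answers on its cluster DNS name
  kubectl --context addons-631322 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # Host-side: registry-proxy publishes port 5000 on the node IP
  curl -s "http://$(minikube -p addons-631322 ip):5000/v2/_catalog"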

                                                
                                    
TestAddons/parallel/InspektorGadget (11.47s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qgqjs" [caf6b445-c6d5-44f4-963c-181602f42a60] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004270031s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-631322
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-631322: (6.465871697s)
--- PASS: TestAddons/parallel/InspektorGadget (11.47s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.2s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.404498ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-sngfl" [9dcb8698-4a1e-4840-be97-c1bd6d3fd69a] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.006144127s
addons_test.go:475: (dbg) Run:  kubectl --context addons-631322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-631322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.446367027s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.20s)

                                                
                                    
TestAddons/parallel/CSI (68.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 13.65901ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [566c46b2-2d1a-4b0c-8740-4eb211daed77] Pending
helpers_test.go:344: "task-pv-pod" [566c46b2-2d1a-4b0c-8740-4eb211daed77] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [566c46b2-2d1a-4b0c-8740-4eb211daed77] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.00328857s
addons_test.go:590: (dbg) Run:  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-631322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-631322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-631322 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-631322 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [28192d9b-561f-4b65-87d2-28e34b24bb47] Pending
helpers_test.go:344: "task-pv-pod-restore" [28192d9b-561f-4b65-87d2-28e34b24bb47] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [28192d9b-561f-4b65-87d2-28e34b24bb47] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004469962s
addons_test.go:632: (dbg) Run:  kubectl --context addons-631322 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-631322 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-631322 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-631322 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.718707988s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.31s)
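
The CSI test walks a full claim -> pod -> VolumeSnapshot -> restored claim -> pod cycle. A condensed sketch of that sequence using the same manifests (the testdata paths are relative to the minikube test tree, so this only works from a checkout; the timeout is illustrative):

  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-631322 wait --for=condition=Ready pod/task-pv-pod --timeout=6m
  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-631322 delete pod task-pv-pod && kubectl --context addons-631322 delete pvc hpvc
  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-631322 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml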

                                                
                                    
TestAddons/parallel/Headlamp (17.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-631322 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-f7h5m" [ab0fd630-85fb-4a20-9d29-abe07d251a64] Pending
helpers_test.go:344: "headlamp-7867546754-f7h5m" [ab0fd630-85fb-4a20-9d29-abe07d251a64] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-f7h5m" [ab0fd630-85fb-4a20-9d29-abe07d251a64] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004558709s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (17.13s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-xppht" [e34e3df2-a434-4bc0-84c8-72eabfecb5e1] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003446503s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-631322
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                    
TestAddons/parallel/LocalPath (61.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-631322 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-631322 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [74071a1a-b1f7-4541-ab70-b52bea9249b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [74071a1a-b1f7-4541-ab70-b52bea9249b0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [74071a1a-b1f7-4541-ab70-b52bea9249b0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003838946s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-631322 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 ssh "cat /opt/local-path-provisioner/pvc-3da48a95-fd4c-467b-9806-616d63c75cdf_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-631322 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-631322 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-631322 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.260262552s)
--- PASS: TestAddons/parallel/LocalPath (61.11s)
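
The repeated "get pvc ... -o jsonpath={.status.phase}" calls above are a polling loop waiting for the claim to become Bound (the claim stays Pending until the local-path provisioner binds it). On a reasonably recent kubectl the same wait can be written as a one-liner; a sketch:

  # Equivalent to the jsonpath polling loop above (jsonpath waits need kubectl v1.23+)
  kubectl --context addons-631322 wait --for=jsonpath='{.status.phase}'=Bound pvc/test-pvc --timeout=5m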

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m8p57" [0f635111-3024-43e1-bb48-73600f90a010] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005841363s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-631322
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                    
TestAddons/parallel/Yakd (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-nj9vf" [16efc57f-7fcf-470b-9b53-e53db39d6a51] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004727433s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-631322 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-631322 addons disable yakd --alsologtostderr -v=1: (5.740982783s)
--- PASS: TestAddons/parallel/Yakd (10.75s)

                                                
                                    
TestCertOptions (45.69s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-606292 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-606292 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.402997446s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-606292 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-606292 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-606292 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-606292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-606292
--- PASS: TestCertOptions (45.69s)
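
To verify the custom SANs and apiserver port by hand, the two commands from the test can be rerun directly; a sketch (the grep filters are only for readability):

  # The extra --apiserver-ips/--apiserver-names should appear as SANs
  minikube -p cert-options-606292 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
  # The in-VM admin kubeconfig should point at the custom port 8555
  minikube ssh -p cert-options-606292 -- "sudo cat /etc/kubernetes/admin.conf" | grep server: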

                                                
                                    
TestCertExpiration (298.21s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-168661 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-168661 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m16.441361761s)
E0729 13:22:18.313288  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-168661 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-168661 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.165925979s)
helpers_test.go:175: Cleaning up "cert-expiration-168661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-168661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-168661: (1.602468074s)
--- PASS: TestCertExpiration (298.21s)
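
The effect of --cert-expiration can be observed directly from the certificate's notAfter date; a sketch (assumes the profile from the test is still running):

  # With --cert-expiration=3m the notAfter date is only minutes after start
  minikube -p cert-expiration-168661 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
  # Starting again with a longer expiration (as the test does) reissues the certificates
  minikube start -p cert-expiration-168661 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio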

                                                
                                    
TestForceSystemdFlag (96.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-454180 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-454180 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m35.224794707s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-454180 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-454180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-454180
--- PASS: TestForceSystemdFlag (96.24s)
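
The assertion behind this test is that --force-systemd selects the systemd cgroup manager in CRI-O's generated drop-in config. A sketch of checking that by hand (key name per the standard crio.conf schema):

  # Expect something like: cgroup_manager = "systemd"
  minikube -p force-systemd-flag-454180 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager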

                                                
                                    
TestForceSystemdEnv (71.61s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-265470 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-265470 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.817044652s)
helpers_test.go:175: Cleaning up "force-systemd-env-265470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-265470
--- PASS: TestForceSystemdEnv (71.61s)

                                                
                                    
TestKVMDriverInstallOrUpdate (22.8s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (22.80s)

                                                
                                    
TestErrorSpam/setup (43.57s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-298970 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-298970 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-298970 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-298970 --driver=kvm2  --container-runtime=crio: (43.572998972s)
--- PASS: TestErrorSpam/setup (43.57s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
TestErrorSpam/unpause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

                                                
                                    
TestErrorSpam/stop (5.64s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 stop: (2.310091446s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 stop: (1.359535557s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-298970 --log_dir /tmp/nospam-298970 stop: (1.973572952s)
--- PASS: TestErrorSpam/stop (5.64s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19341-233093/.minikube/files/etc/test/nested/copy/240340/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (65.36s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311529 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0729 12:17:18.313119  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:18.319049  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:18.329283  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:18.349588  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:18.389856  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:18.470229  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:18.630632  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:18.951213  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:19.592143  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:20.873011  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:23.434106  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-311529 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m5.363754151s)
--- PASS: TestFunctional/serial/StartWithProxy (65.36s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311529 --alsologtostderr -v=8
E0729 12:17:28.555093  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:38.795484  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:17:59.275675  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-311529 --alsologtostderr -v=8: (42.466684947s)
functional_test.go:659: soft start took 42.467416178s for "functional-311529" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.47s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-311529 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 cache add registry.k8s.io/pause:3.1: (1.016278409s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 cache add registry.k8s.io/pause:3.3: (1.112018565s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 cache add registry.k8s.io/pause:latest: (1.017941225s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-311529 /tmp/TestFunctionalserialCacheCmdcacheadd_local1993259401/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cache add minikube-local-cache-test:functional-311529
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 cache add minikube-local-cache-test:functional-311529: (2.370420706s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cache delete minikube-local-cache-test:functional-311529
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-311529
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.70s)
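
The local-cache flow above builds an image on the host and pushes it into the profile's image cache; a sketch using the same image tag (the build context "." is a placeholder for any directory containing a Dockerfile):

  docker build -t minikube-local-cache-test:functional-311529 .
  minikube -p functional-311529 cache add minikube-local-cache-test:functional-311529
  # Clean up both the cached copy and the host image
  minikube -p functional-311529 cache delete minikube-local-cache-test:functional-311529
  docker rmi minikube-local-cache-test:functional-311529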

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.721062ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)
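
Taken together, the CacheCmd tests above exercise the image-cache workflow end to end. A minimal recap of the same commands, using the profile from this run (the image name is the one used by the tests):
  $ out/minikube-linux-amd64 -p functional-311529 cache add registry.k8s.io/pause:latest
  $ out/minikube-linux-amd64 cache list
  $ out/minikube-linux-amd64 -p functional-311529 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # verify the image is present inside the node
  $ out/minikube-linux-amd64 -p functional-311529 cache reload                                            # re-push cached images after they were removed in the VM
  $ out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest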

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 kubectl -- --context functional-311529 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-311529 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)
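
Both variants above run kubectl against the functional-311529 context: the first goes through the minikube binary, where everything after the `--` separator is passed to kubectl, and the second calls the out/kubectl shim built for this run directly. The pass-through form, as run above:
  $ out/minikube-linux-amd64 -p functional-311529 kubectl -- --context functional-311529 get pods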

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (61.85s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311529 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 12:18:40.236977  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-311529 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m1.854375761s)
functional_test.go:757: restart took 1m1.854497584s for "functional-311529" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (61.85s)
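
The restart above passes component flags through --extra-config, which takes component.key=value pairs (here the apiserver's admission-plugin list) and applies them on the next start. The invocation, reduced to its essentials:
  $ out/minikube-linux-amd64 start -p functional-311529 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all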

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-311529 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 logs: (1.380760934s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 logs --file /tmp/TestFunctionalserialLogsFileCmd2024560578/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 logs --file /tmp/TestFunctionalserialLogsFileCmd2024560578/001/logs.txt: (1.387827053s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)
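
LogsCmd and LogsFileCmd cover the two output modes of the same command: print to stdout, or write to a file with --file. For reference (the destination path below is illustrative):
  $ out/minikube-linux-amd64 -p functional-311529 logs
  $ out/minikube-linux-amd64 -p functional-311529 logs --file /tmp/logs.txt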

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-311529 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-311529
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-311529: exit status 115 (266.914218ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.95:31898 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-311529 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-311529 delete -f testdata/invalidsvc.yaml: (1.49344172s)
--- PASS: TestFunctional/serial/InvalidService (4.96s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 config get cpus: exit status 14 (61.439077ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 config get cpus: exit status 14 (49.001164ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
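
The exit status 14 seen twice above is the expected result of `config get` on a key that is not set; the set/unset pair in between shows the full round trip. Reduced to the commands themselves:
  $ out/minikube-linux-amd64 -p functional-311529 config set cpus 2
  $ out/minikube-linux-amd64 -p functional-311529 config get cpus      # prints 2
  $ out/minikube-linux-amd64 -p functional-311529 config unset cpus
  $ out/minikube-linux-amd64 -p functional-311529 config get cpus      # exit status 14: key not found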

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (19.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-311529 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-311529 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 251150: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.61s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-311529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.808325ms)

                                                
                                                
-- stdout --
	* [functional-311529] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:19:51.780541  250764 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:19:51.780651  250764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:19:51.780661  250764 out.go:304] Setting ErrFile to fd 2...
	I0729 12:19:51.780666  250764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:19:51.780914  250764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:19:51.781397  250764 out.go:298] Setting JSON to false
	I0729 12:19:51.782363  250764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7335,"bootTime":1722248257,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:19:51.782419  250764 start.go:139] virtualization: kvm guest
	I0729 12:19:51.784780  250764 out.go:177] * [functional-311529] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:19:51.786201  250764 notify.go:220] Checking for updates...
	I0729 12:19:51.786266  250764 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:19:51.787665  250764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:19:51.789087  250764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:19:51.790385  250764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:19:51.791776  250764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:19:51.793440  250764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:19:51.795072  250764 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:19:51.795477  250764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:19:51.795562  250764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:19:51.811273  250764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0729 12:19:51.811734  250764 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:19:51.812219  250764 main.go:141] libmachine: Using API Version  1
	I0729 12:19:51.812258  250764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:19:51.812611  250764 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:19:51.812825  250764 main.go:141] libmachine: (functional-311529) Calling .DriverName
	I0729 12:19:51.813086  250764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:19:51.813446  250764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:19:51.813483  250764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:19:51.829403  250764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I0729 12:19:51.829844  250764 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:19:51.830299  250764 main.go:141] libmachine: Using API Version  1
	I0729 12:19:51.830318  250764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:19:51.830668  250764 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:19:51.830854  250764 main.go:141] libmachine: (functional-311529) Calling .DriverName
	I0729 12:19:51.863750  250764 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:19:51.865230  250764 start.go:297] selected driver: kvm2
	I0729 12:19:51.865248  250764 start.go:901] validating driver "kvm2" against &{Name:functional-311529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-311529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:19:51.865415  250764 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:19:51.867897  250764 out.go:177] 
	W0729 12:19:51.869129  250764 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 12:19:51.870517  250764 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311529 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
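
The first invocation above is expected to fail: --dry-run still validates the requested resources against the existing profile, so asking for 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY guard (exit status 23) without touching the cluster, while the second invocation, with no memory override, validates cleanly. A sketch of the failing form:
  $ out/minikube-linux-amd64 start -p functional-311529 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio
  # exit status 23: requested 250MiB is below the usable minimum of 1800MB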

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-311529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.905998ms)

                                                
                                                
-- stdout --
	* [functional-311529] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:19:28.785375  249700 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:19:28.785657  249700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:19:28.785667  249700 out.go:304] Setting ErrFile to fd 2...
	I0729 12:19:28.785671  249700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:19:28.785920  249700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:19:28.786429  249700 out.go:298] Setting JSON to false
	I0729 12:19:28.787377  249700 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7312,"bootTime":1722248257,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:19:28.787437  249700 start.go:139] virtualization: kvm guest
	I0729 12:19:28.789502  249700 out.go:177] * [functional-311529] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 12:19:28.790760  249700 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 12:19:28.790770  249700 notify.go:220] Checking for updates...
	I0729 12:19:28.792087  249700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:19:28.793364  249700 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 12:19:28.794620  249700 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 12:19:28.795915  249700 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:19:28.797372  249700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:19:28.799239  249700 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:19:28.799862  249700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:19:28.799926  249700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:19:28.815142  249700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0729 12:19:28.815608  249700 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:19:28.816339  249700 main.go:141] libmachine: Using API Version  1
	I0729 12:19:28.816376  249700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:19:28.816688  249700 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:19:28.816918  249700 main.go:141] libmachine: (functional-311529) Calling .DriverName
	I0729 12:19:28.817173  249700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:19:28.817461  249700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:19:28.817498  249700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:19:28.833905  249700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0729 12:19:28.834288  249700 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:19:28.834705  249700 main.go:141] libmachine: Using API Version  1
	I0729 12:19:28.834841  249700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:19:28.835165  249700 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:19:28.835445  249700 main.go:141] libmachine: (functional-311529) Calling .DriverName
	I0729 12:19:28.872060  249700 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 12:19:28.873341  249700 start.go:297] selected driver: kvm2
	I0729 12:19:28.873360  249700 start.go:901] validating driver "kvm2" against &{Name:functional-311529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-311529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:19:28.873493  249700 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:19:28.875545  249700 out.go:177] 
	W0729 12:19:28.876897  249700 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 12:19:28.878208  249700 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)
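
The three status calls above show the default table output, a custom Go-template format via -f, and JSON via -o json. The templated form, with the fields exercised here:
  $ out/minikube-linux-amd64 -p functional-311529 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  $ out/minikube-linux-amd64 -p functional-311529 status -o json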

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (20.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-311529 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-311529 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-k5zt8" [7a1de9ee-b02d-4519-a290-6849d1bc4323] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-k5zt8" [7a1de9ee-b02d-4519-a290-6849d1bc4323] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.217499778s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.95:32052
functional_test.go:1671: http://192.168.39.95:32052: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-k5zt8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.95:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.95:32052
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.76s)
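
The flow above is the standard way to reach a workload from outside the VM: create a deployment, expose it as a NodePort service, then let minikube resolve the node IP and port. Condensed, with the image and names used by the test:
  $ kubectl --context functional-311529 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  $ kubectl --context functional-311529 expose deployment hello-node-connect --type=NodePort --port=8080
  $ out/minikube-linux-amd64 -p functional-311529 service hello-node-connect --url
  http://192.168.39.95:32052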

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (48.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5a74b969-4a2d-4c65-ae4c-390327774eaa] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007056989s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-311529 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-311529 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-311529 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-311529 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-311529 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5a46c81e-201e-4231-8291-33aecf2d3127] Pending
helpers_test.go:344: "sp-pod" [5a46c81e-201e-4231-8291-33aecf2d3127] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5a46c81e-201e-4231-8291-33aecf2d3127] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.003911666s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-311529 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-311529 delete -f testdata/storage-provisioner/pod.yaml
E0729 12:20:02.158165  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-311529 delete -f testdata/storage-provisioner/pod.yaml: (1.289237595s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-311529 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f5704abd-effc-4658-b9a7-38b471285bf1] Pending
helpers_test.go:344: "sp-pod" [f5704abd-effc-4658-b9a7-38b471285bf1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f5704abd-effc-4658-b9a7-38b471285bf1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004947531s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-311529 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.69s)
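
The sequence above is what demonstrates persistence: data written by the first sp-pod survives its deletion because the volume is backed by the claim, not the pod. The kubectl flow, condensed (the manifests are the testdata files referenced in the log):
  $ kubectl --context functional-311529 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-311529 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-311529 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-311529 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-311529 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-311529 exec sp-pod -- ls /tmp/mount   # foo is still there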

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh -n functional-311529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cp functional-311529:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1612098797/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh -n functional-311529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh -n functional-311529 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
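
CpCmd covers both copy directions plus creation of a missing destination directory. The host-to-guest and guest-to-host forms, as exercised above (the local destination path below is illustrative):
  $ out/minikube-linux-amd64 -p functional-311529 cp testdata/cp-test.txt /home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p functional-311529 cp functional-311529:/home/docker/cp-test.txt /tmp/cp-test.txt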

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-311529 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-s4dln" [a801baf2-883b-4f1c-814b-200c8d3080bf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-s4dln" [a801baf2-883b-4f1c-814b-200c8d3080bf] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.005318653s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-311529 exec mysql-64454c8b5c-s4dln -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-311529 exec mysql-64454c8b5c-s4dln -- mysql -ppassword -e "show databases;": exit status 1 (137.887437ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-311529 exec mysql-64454c8b5c-s4dln -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-311529 exec mysql-64454c8b5c-s4dln -- mysql -ppassword -e "show databases;": exit status 1 (520.250965ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-311529 exec mysql-64454c8b5c-s4dln -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.06s)
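
The two failed exec attempts above are expected: the pod reports Running as soon as the container starts, but mysqld needs a few more seconds before it accepts socket connections, so the test simply retries the same command until it succeeds:
  $ kubectl --context functional-311529 exec mysql-64454c8b5c-s4dln -- mysql -ppassword -e "show databases;"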

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/240340/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /etc/test/nested/copy/240340/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/240340.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /etc/ssl/certs/240340.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/240340.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /usr/share/ca-certificates/240340.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2403402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /etc/ssl/certs/2403402.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2403402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /usr/share/ca-certificates/2403402.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
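
FileSync and CertSync both exercise minikube's host-to-guest sync: files staged under the host's MINIKUBE_HOME (.minikube/files for arbitrary paths, .minikube/certs for extra CA certificates) are pushed into the VM on start, which is why the checks above look for the PID-named test files and the hashed *.0 certificate links inside the guest. A spot check from the host uses ssh, as in the log:
  $ out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /etc/test/nested/copy/240340/hosts"
  $ out/minikube-linux-amd64 -p functional-311529 ssh "sudo cat /etc/ssl/certs/240340.pem"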

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-311529 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh "sudo systemctl is-active docker": exit status 1 (205.317721ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh "sudo systemctl is-active containerd": exit status 1 (227.794123ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
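
Since this run uses --container-runtime=crio, the docker and containerd units inside the guest are expected to be inactive; `systemctl is-active` prints the state and exits non-zero for anything other than active, which is why both ssh invocations above report remote exit status 3. The same check against the active runtime:
  $ out/minikube-linux-amd64 -p functional-311529 ssh "sudo systemctl is-active crio"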

                                                
                                    
x
+
TestFunctional/parallel/License (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "223.209731ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "58.40218ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311529 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-311529
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-311529
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311529 image ls --format short --alsologtostderr:
I0729 12:20:06.528426  251373 out.go:291] Setting OutFile to fd 1 ...
I0729 12:20:06.528677  251373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:06.528687  251373 out.go:304] Setting ErrFile to fd 2...
I0729 12:20:06.528691  251373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:06.528898  251373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
I0729 12:20:06.529425  251373 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:06.529529  251373 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:06.529880  251373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:06.529928  251373 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:06.545476  251373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
I0729 12:20:06.546007  251373 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:06.546631  251373 main.go:141] libmachine: Using API Version  1
I0729 12:20:06.546659  251373 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:06.547064  251373 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:06.547299  251373 main.go:141] libmachine: (functional-311529) Calling .GetState
I0729 12:20:06.549294  251373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:06.549342  251373 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:06.564120  251373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
I0729 12:20:06.564546  251373 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:06.565095  251373 main.go:141] libmachine: Using API Version  1
I0729 12:20:06.565122  251373 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:06.565460  251373 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:06.565658  251373 main.go:141] libmachine: (functional-311529) Calling .DriverName
I0729 12:20:06.565960  251373 ssh_runner.go:195] Run: systemctl --version
I0729 12:20:06.565992  251373 main.go:141] libmachine: (functional-311529) Calling .GetSSHHostname
I0729 12:20:06.568657  251373 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:06.569086  251373 main.go:141] libmachine: (functional-311529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ad:1a", ip: ""} in network mk-functional-311529: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:35 +0000 UTC Type:0 Mac:52:54:00:4a:ad:1a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-311529 Clientid:01:52:54:00:4a:ad:1a}
I0729 12:20:06.569115  251373 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined IP address 192.168.39.95 and MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:06.569241  251373 main.go:141] libmachine: (functional-311529) Calling .GetSSHPort
I0729 12:20:06.569404  251373 main.go:141] libmachine: (functional-311529) Calling .GetSSHKeyPath
I0729 12:20:06.569551  251373 main.go:141] libmachine: (functional-311529) Calling .GetSSHUsername
I0729 12:20:06.569714  251373 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/functional-311529/id_rsa Username:docker}
I0729 12:20:06.649766  251373 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 12:20:06.694581  251373 main.go:141] libmachine: Making call to close driver server
I0729 12:20:06.694606  251373 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:06.694919  251373 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:06.694937  251373 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:20:06.694946  251373 main.go:141] libmachine: Making call to close driver server
I0729 12:20:06.694954  251373 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:06.695218  251373 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:06.695232  251373 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:20:06.695253  251373 main.go:141] libmachine: (functional-311529) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311529 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kicbase/echo-server           | functional-311529  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| localhost/minikube-local-cache-test     | functional-311529  | 219fa3132ae19 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311529 image ls --format table --alsologtostderr:
I0729 12:20:06.954553  251421 out.go:291] Setting OutFile to fd 1 ...
I0729 12:20:06.954697  251421 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:06.954709  251421 out.go:304] Setting ErrFile to fd 2...
I0729 12:20:06.954721  251421 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:06.954898  251421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
I0729 12:20:06.955462  251421 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:06.955557  251421 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:06.955894  251421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:06.955936  251421 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:06.970787  251421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
I0729 12:20:06.971302  251421 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:06.971824  251421 main.go:141] libmachine: Using API Version  1
I0729 12:20:06.971845  251421 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:06.972198  251421 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:06.972389  251421 main.go:141] libmachine: (functional-311529) Calling .GetState
I0729 12:20:06.974310  251421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:06.974351  251421 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:06.989770  251421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43581
I0729 12:20:06.990222  251421 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:06.990686  251421 main.go:141] libmachine: Using API Version  1
I0729 12:20:06.990708  251421 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:06.991028  251421 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:06.991219  251421 main.go:141] libmachine: (functional-311529) Calling .DriverName
I0729 12:20:06.991435  251421 ssh_runner.go:195] Run: systemctl --version
I0729 12:20:06.991467  251421 main.go:141] libmachine: (functional-311529) Calling .GetSSHHostname
I0729 12:20:06.994504  251421 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:06.994938  251421 main.go:141] libmachine: (functional-311529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ad:1a", ip: ""} in network mk-functional-311529: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:35 +0000 UTC Type:0 Mac:52:54:00:4a:ad:1a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-311529 Clientid:01:52:54:00:4a:ad:1a}
I0729 12:20:06.994972  251421 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined IP address 192.168.39.95 and MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:06.995133  251421 main.go:141] libmachine: (functional-311529) Calling .GetSSHPort
I0729 12:20:06.995300  251421 main.go:141] libmachine: (functional-311529) Calling .GetSSHKeyPath
I0729 12:20:06.995475  251421 main.go:141] libmachine: (functional-311529) Calling .GetSSHUsername
I0729 12:20:06.995645  251421 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/functional-311529/id_rsa Username:docker}
I0729 12:20:07.071906  251421 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 12:20:07.109805  251421 main.go:141] libmachine: Making call to close driver server
I0729 12:20:07.109827  251421 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:07.110207  251421 main.go:141] libmachine: (functional-311529) DBG | Closing plugin on server side
I0729 12:20:07.110248  251421 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:07.110269  251421 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:20:07.110285  251421 main.go:141] libmachine: Making call to close driver server
I0729 12:20:07.110296  251421 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:07.110543  251421 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:07.110557  251421 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311529 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3edc18e7b76722eb2eb37a0858c
09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-311529"],"size":"4943877"},{"id":"219fa3132ae19ac3d269e1dac22de4db227d5d10c2ea0b84b6e54d5f31952e7d","repoDigests":["localhost/minikube-local-cache-test@sha256:52167a91defee33a156f4ddb306ebd8af486f
9b4110cca2d95b4bda75a815dbf"],"repoTags":["localhost/minikube-local-cache-test:functional-311529"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d
4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashbo
ard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f
4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.i
o/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311529 image ls --format json --alsologtostderr:
I0729 12:20:06.744122  251397 out.go:291] Setting OutFile to fd 1 ...
I0729 12:20:06.744398  251397 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:06.744410  251397 out.go:304] Setting ErrFile to fd 2...
I0729 12:20:06.744414  251397 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:06.744671  251397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
I0729 12:20:06.745315  251397 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:06.745426  251397 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:06.745853  251397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:06.745904  251397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:06.760808  251397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
I0729 12:20:06.761356  251397 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:06.762033  251397 main.go:141] libmachine: Using API Version  1
I0729 12:20:06.762054  251397 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:06.762434  251397 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:06.762635  251397 main.go:141] libmachine: (functional-311529) Calling .GetState
I0729 12:20:06.764661  251397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:06.764704  251397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:06.780478  251397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
I0729 12:20:06.780960  251397 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:06.781436  251397 main.go:141] libmachine: Using API Version  1
I0729 12:20:06.781458  251397 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:06.781801  251397 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:06.781996  251397 main.go:141] libmachine: (functional-311529) Calling .DriverName
I0729 12:20:06.782223  251397 ssh_runner.go:195] Run: systemctl --version
I0729 12:20:06.782247  251397 main.go:141] libmachine: (functional-311529) Calling .GetSSHHostname
I0729 12:20:06.784725  251397 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:06.785172  251397 main.go:141] libmachine: (functional-311529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ad:1a", ip: ""} in network mk-functional-311529: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:35 +0000 UTC Type:0 Mac:52:54:00:4a:ad:1a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-311529 Clientid:01:52:54:00:4a:ad:1a}
I0729 12:20:06.785202  251397 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined IP address 192.168.39.95 and MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:06.785328  251397 main.go:141] libmachine: (functional-311529) Calling .GetSSHPort
I0729 12:20:06.785505  251397 main.go:141] libmachine: (functional-311529) Calling .GetSSHKeyPath
I0729 12:20:06.785667  251397 main.go:141] libmachine: (functional-311529) Calling .GetSSHUsername
I0729 12:20:06.785811  251397 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/functional-311529/id_rsa Username:docker}
I0729 12:20:06.863687  251397 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 12:20:06.907352  251397 main.go:141] libmachine: Making call to close driver server
I0729 12:20:06.907368  251397 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:06.907629  251397 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:06.907646  251397 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:20:06.907658  251397 main.go:141] libmachine: Making call to close driver server
I0729 12:20:06.907668  251397 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:06.907907  251397 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:06.907921  251397 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311529 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-311529
size: "4943877"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 219fa3132ae19ac3d269e1dac22de4db227d5d10c2ea0b84b6e54d5f31952e7d
repoDigests:
- localhost/minikube-local-cache-test@sha256:52167a91defee33a156f4ddb306ebd8af486f9b4110cca2d95b4bda75a815dbf
repoTags:
- localhost/minikube-local-cache-test:functional-311529
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311529 image ls --format yaml --alsologtostderr:
I0729 12:20:07.156303  251444 out.go:291] Setting OutFile to fd 1 ...
I0729 12:20:07.156599  251444 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:07.156615  251444 out.go:304] Setting ErrFile to fd 2...
I0729 12:20:07.156622  251444 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:07.157207  251444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
I0729 12:20:07.157787  251444 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:07.157892  251444 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:07.158224  251444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:07.158266  251444 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:07.174330  251444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33979
I0729 12:20:07.174817  251444 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:07.175452  251444 main.go:141] libmachine: Using API Version  1
I0729 12:20:07.175486  251444 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:07.175831  251444 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:07.176054  251444 main.go:141] libmachine: (functional-311529) Calling .GetState
I0729 12:20:07.177852  251444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:07.177889  251444 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:07.193258  251444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
I0729 12:20:07.193679  251444 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:07.194251  251444 main.go:141] libmachine: Using API Version  1
I0729 12:20:07.194280  251444 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:07.194659  251444 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:07.194876  251444 main.go:141] libmachine: (functional-311529) Calling .DriverName
I0729 12:20:07.195123  251444 ssh_runner.go:195] Run: systemctl --version
I0729 12:20:07.195150  251444 main.go:141] libmachine: (functional-311529) Calling .GetSSHHostname
I0729 12:20:07.198024  251444 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:07.198488  251444 main.go:141] libmachine: (functional-311529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ad:1a", ip: ""} in network mk-functional-311529: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:35 +0000 UTC Type:0 Mac:52:54:00:4a:ad:1a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-311529 Clientid:01:52:54:00:4a:ad:1a}
I0729 12:20:07.198517  251444 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined IP address 192.168.39.95 and MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:07.198651  251444 main.go:141] libmachine: (functional-311529) Calling .GetSSHPort
I0729 12:20:07.198812  251444 main.go:141] libmachine: (functional-311529) Calling .GetSSHKeyPath
I0729 12:20:07.198962  251444 main.go:141] libmachine: (functional-311529) Calling .GetSSHUsername
I0729 12:20:07.199103  251444 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/functional-311529/id_rsa Username:docker}
I0729 12:20:07.279827  251444 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 12:20:07.322332  251444 main.go:141] libmachine: Making call to close driver server
I0729 12:20:07.322352  251444 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:07.322684  251444 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:07.322703  251444 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:20:07.322716  251444 main.go:141] libmachine: (functional-311529) DBG | Closing plugin on server side
I0729 12:20:07.322727  251444 main.go:141] libmachine: Making call to close driver server
I0729 12:20:07.322757  251444 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:07.323009  251444 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:07.323084  251444 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:20:07.323091  251444 main.go:141] libmachine: (functional-311529) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
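
The ImageList variants above (short, table, json, yaml) differ only in the --format flag passed to "image ls"; the stderr traces show each invocation opening an SSH session to the node and running "sudo crictl images --output json" before rendering. A rough manual reproduction, assuming the functional-311529 profile from this run is still up (a sketch for reference, not part of the test output):

	# list images through minikube's formatter
	out/minikube-linux-amd64 -p functional-311529 image ls --format table
	# or query the CRI-O image store directly on the node
	out/minikube-linux-amd64 -p functional-311529 ssh -- sudo crictl images --output json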

TestFunctional/parallel/ImageCommands/ImageBuild (4.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh pgrep buildkitd: exit status 1 (186.657956ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image build -t localhost/my-image:functional-311529 testdata/build --alsologtostderr
2024/07/29 12:20:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 image build -t localhost/my-image:functional-311529 testdata/build --alsologtostderr: (4.149599047s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311529 image build -t localhost/my-image:functional-311529 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0bc57a0105a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-311529
--> d1e8d10f247
Successfully tagged localhost/my-image:functional-311529
d1e8d10f247006583420ff3dc4a47fa6d01a89e91e98737249d6189903637be7
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311529 image build -t localhost/my-image:functional-311529 testdata/build --alsologtostderr:
I0729 12:20:07.561884  251499 out.go:291] Setting OutFile to fd 1 ...
I0729 12:20:07.562018  251499 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:07.562027  251499 out.go:304] Setting ErrFile to fd 2...
I0729 12:20:07.562031  251499 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:20:07.562209  251499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
I0729 12:20:07.562808  251499 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:07.563378  251499 config.go:182] Loaded profile config "functional-311529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 12:20:07.563755  251499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:07.563793  251499 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:07.579887  251499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
I0729 12:20:07.580437  251499 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:07.581110  251499 main.go:141] libmachine: Using API Version  1
I0729 12:20:07.581135  251499 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:07.581498  251499 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:07.581910  251499 main.go:141] libmachine: (functional-311529) Calling .GetState
I0729 12:20:07.583705  251499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 12:20:07.583741  251499 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:20:07.598584  251499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
I0729 12:20:07.599075  251499 main.go:141] libmachine: () Calling .GetVersion
I0729 12:20:07.599527  251499 main.go:141] libmachine: Using API Version  1
I0729 12:20:07.599544  251499 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:20:07.599898  251499 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:20:07.600106  251499 main.go:141] libmachine: (functional-311529) Calling .DriverName
I0729 12:20:07.600336  251499 ssh_runner.go:195] Run: systemctl --version
I0729 12:20:07.600368  251499 main.go:141] libmachine: (functional-311529) Calling .GetSSHHostname
I0729 12:20:07.602957  251499 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:07.603304  251499 main.go:141] libmachine: (functional-311529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ad:1a", ip: ""} in network mk-functional-311529: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:35 +0000 UTC Type:0 Mac:52:54:00:4a:ad:1a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-311529 Clientid:01:52:54:00:4a:ad:1a}
I0729 12:20:07.603330  251499 main.go:141] libmachine: (functional-311529) DBG | domain functional-311529 has defined IP address 192.168.39.95 and MAC address 52:54:00:4a:ad:1a in network mk-functional-311529
I0729 12:20:07.603461  251499 main.go:141] libmachine: (functional-311529) Calling .GetSSHPort
I0729 12:20:07.603629  251499 main.go:141] libmachine: (functional-311529) Calling .GetSSHKeyPath
I0729 12:20:07.603765  251499 main.go:141] libmachine: (functional-311529) Calling .GetSSHUsername
I0729 12:20:07.603904  251499 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/functional-311529/id_rsa Username:docker}
I0729 12:20:07.683737  251499 build_images.go:161] Building image from path: /tmp/build.3928543812.tar
I0729 12:20:07.683808  251499 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 12:20:07.696513  251499 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3928543812.tar
I0729 12:20:07.701516  251499 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3928543812.tar: stat -c "%s %y" /var/lib/minikube/build/build.3928543812.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3928543812.tar': No such file or directory
I0729 12:20:07.701549  251499 ssh_runner.go:362] scp /tmp/build.3928543812.tar --> /var/lib/minikube/build/build.3928543812.tar (3072 bytes)
I0729 12:20:07.731802  251499 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3928543812
I0729 12:20:07.743253  251499 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3928543812 -xf /var/lib/minikube/build/build.3928543812.tar
I0729 12:20:07.754694  251499 crio.go:315] Building image: /var/lib/minikube/build/build.3928543812
I0729 12:20:07.754763  251499 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-311529 /var/lib/minikube/build/build.3928543812 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 12:20:11.628998  251499 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-311529 /var/lib/minikube/build/build.3928543812 --cgroup-manager=cgroupfs: (3.87420936s)
I0729 12:20:11.629078  251499 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3928543812
I0729 12:20:11.639960  251499 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3928543812.tar
I0729 12:20:11.660388  251499 build_images.go:217] Built localhost/my-image:functional-311529 from /tmp/build.3928543812.tar
I0729 12:20:11.660422  251499 build_images.go:133] succeeded building to: functional-311529
I0729 12:20:11.660427  251499 build_images.go:134] failed building to: 
I0729 12:20:11.660452  251499 main.go:141] libmachine: Making call to close driver server
I0729 12:20:11.660462  251499 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:11.660798  251499 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:11.660818  251499 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:20:11.660826  251499 main.go:141] libmachine: Making call to close driver server
I0729 12:20:11.660828  251499 main.go:141] libmachine: (functional-311529) DBG | Closing plugin on server side
I0729 12:20:11.660834  251499 main.go:141] libmachine: (functional-311529) Calling .Close
I0729 12:20:11.661074  251499 main.go:141] libmachine: (functional-311529) DBG | Closing plugin on server side
I0729 12:20:11.661080  251499 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:20:11.661110  251499 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.56s)
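
The ImageBuild test above tars up testdata/build, copies it to the node, and builds it there with podman (the "sudo podman build ... --cgroup-manager=cgroupfs" line in the stderr trace). Reconstructed from the STEP 1/3 .. 3/3 output, a build context along the following lines should reproduce the same three steps; the real testdata/build directory may differ (in particular, the contents of content.txt are not shown in the log):

	# hypothetical stand-in for testdata/build
	mkdir -p /tmp/build && cd /tmp/build
	echo placeholder > content.txt   # actual payload unknown from the log
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-311529 image build -t localhost/my-image:functional-311529 /tmp/build --alsologtostderr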

TestFunctional/parallel/ImageCommands/Setup (2.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (2.603634723s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-311529
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.62s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "230.955153ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "47.848095ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/MountCmd/any-port (21.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdany-port3688848095/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722255568884406334" to /tmp/TestFunctionalparallelMountCmdany-port3688848095/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722255568884406334" to /tmp/TestFunctionalparallelMountCmdany-port3688848095/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722255568884406334" to /tmp/TestFunctionalparallelMountCmdany-port3688848095/001/test-1722255568884406334
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.977279ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 12:19 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 12:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 12:19 test-1722255568884406334
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh cat /mount-9p/test-1722255568884406334
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-311529 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e53aa09a-13e7-4f66-a0a0-7c768484bc38] Pending
helpers_test.go:344: "busybox-mount" [e53aa09a-13e7-4f66-a0a0-7c768484bc38] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e53aa09a-13e7-4f66-a0a0-7c768484bc38] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e53aa09a-13e7-4f66-a0a0-7c768484bc38] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.361441329s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-311529 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdany-port3688848095/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.82s)
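
The any-port mount test above exports a host temp directory into the guest over 9p, waits for the busybox-mount pod to read and write through it, and then unmounts. The same flow can be walked through by hand with the commands the test itself uses; the /tmp/demo-mount path here is only an example:

	# keep the mount process running in the background (the test runs it as a daemon)
	out/minikube-linux-amd64 mount -p functional-311529 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-311529 ssh -- ls -la /mount-9p
	# clean up, as the test does at the end
	out/minikube-linux-amd64 -p functional-311529 ssh "sudo umount -f /mount-9p"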

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image load --daemon docker.io/kicbase/echo-server:functional-311529 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 image load --daemon docker.io/kicbase/echo-server:functional-311529 --alsologtostderr: (1.194551683s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image load --daemon docker.io/kicbase/echo-server:functional-311529 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull docker.io/kicbase/echo-server:latest: (1.186343466s)
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-311529
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image load --daemon docker.io/kicbase/echo-server:functional-311529 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 image load --daemon docker.io/kicbase/echo-server:functional-311529 --alsologtostderr: (4.36300702s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image save docker.io/kicbase/echo-server:functional-311529 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 image save docker.io/kicbase/echo-server:functional-311529 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.046423432s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.05s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image rm docker.io/kicbase/echo-server:functional-311529 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.080090701s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)
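
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above exercise a save / remove / reload round trip of the echo-server image. A condensed sketch of that sequence, with the workspace tar path replaced by an arbitrary /tmp path:

	out/minikube-linux-amd64 -p functional-311529 image save docker.io/kicbase/echo-server:functional-311529 /tmp/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-311529 image rm docker.io/kicbase/echo-server:functional-311529 --alsologtostderr
	out/minikube-linux-amd64 -p functional-311529 image load /tmp/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-311529 image ls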

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-311529
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 image save --daemon docker.io/kicbase/echo-server:functional-311529 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-311529
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdspecific-port4166717834/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.5283ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdspecific-port4166717834/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh "sudo umount -f /mount-9p": exit status 1 (212.284412ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-311529 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdspecific-port4166717834/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)
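The specific-port subtest can be reproduced by hand; a minimal sketch, with an illustrative host directory and the fixed 9p port taken from this run:
    # start the 9p mount in the background on port 46464, verify it from inside the guest, then unmount
    minikube mount -p functional-311529 /tmp/mount-src:/mount-9p --port 46464 &
    minikube -p functional-311529 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-311529 ssh -- ls -la /mount-9p
    minikube -p functional-311529 ssh "sudo umount -f /mount-9p"   # returns non-zero once the path is already unmounted, as seen above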

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895247260/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895247260/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895247260/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T" /mount1: exit status 1 (346.167127ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-311529 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895247260/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895247260/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895247260/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)
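VerifyCleanup checks that --kill tears down every mount process for a profile. A sketch of the same flow (host path illustrative):
    # start three mounts of the same host directory, confirm one is visible, then kill them all at once
    minikube mount -p functional-311529 /tmp/mount-src:/mount1 &
    minikube mount -p functional-311529 /tmp/mount-src:/mount2 &
    minikube mount -p functional-311529 /tmp/mount-src:/mount3 &
    minikube -p functional-311529 ssh "findmnt -T" /mount1
    minikube mount -p functional-311529 --kill=true   # terminates all mount helpers for the profile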

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (23.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-311529 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-311529 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-4bxdt" [4ef8a63f-5706-4835-812f-edb82cadd7ed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-4bxdt" [4ef8a63f-5706-4835-812f-edb82cadd7ed] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.00438948s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.33s)
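The ServiceCmd group builds on a single NodePort service. The two kubectl commands above create it; in sketch form, with a readiness check against the label selector the test waits on:
    kubectl --context functional-311529 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-311529 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-311529 get pods -l app=hello-node   # wait until the pod reports Running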

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 service list: (1.230245213s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-311529 service list -o json: (1.229689157s)
functional_test.go:1490: Took "1.229817061s" to run "out/minikube-linux-amd64 -p functional-311529 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.95:31492
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-311529 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.95:31492
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
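The remaining ServiceCmd subtests only vary how the endpoint of that service is rendered. A sketch of the queries they run:
    minikube -p functional-311529 service list                                # table output
    minikube -p functional-311529 service list -o json                        # machine-readable output
    minikube -p functional-311529 service --namespace=default --https --url hello-node
    minikube -p functional-311529 service hello-node --url --format={{.IP}}   # node IP only
    minikube -p functional-311529 service hello-node --url                    # e.g. http://192.168.39.95:31492 in this run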

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-311529
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-311529
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-311529
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (229.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-767488 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 12:22:18.313800  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 12:22:45.999211  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-767488 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m49.151650819s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (229.83s)
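The serial MultiControlPlane tests all run against this one cluster. A sketch of the start command and status check as invoked above; --ha provisions a multi-control-plane cluster (three control-plane nodes in this run) behind the shared API endpoint 192.168.39.254 that appears in the later status output:
    minikube start -p ha-767488 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    minikube -p ha-767488 status -v=7 --alsologtostderr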

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-767488 -- rollout status deployment/busybox: (6.076639161s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-4ppv4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-q6fnx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-trgfp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-4ppv4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-q6fnx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-trgfp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-4ppv4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-q6fnx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-trgfp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.31s)
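DeployApp applies the busybox DNS manifest and resolves three names from every replica. A condensed sketch (pod names such as busybox-fc5497c4f-4ppv4 change per run; <busybox-pod> is a placeholder):
    minikube kubectl -p ha-767488 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube kubectl -p ha-767488 -- rollout status deployment/busybox
    # repeat for each busybox pod and for kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local
    minikube kubectl -p ha-767488 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local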

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-4ppv4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-4ppv4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-q6fnx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-q6fnx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-trgfp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-767488 -- exec busybox-fc5497c4f-trgfp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
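PingHostFromPods extracts the host address from nslookup output and pings it from each pod; a sketch of the per-pod check (<busybox-pod> is again a placeholder):
    minikube kubectl -p ha-767488 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    minikube kubectl -p ha-767488 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"   # 192.168.39.1 is the KVM host gateway in this run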

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-767488 -v=7 --alsologtostderr
E0729 12:24:27.881482  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:27.886789  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:27.897041  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:27.917285  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:27.957614  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:28.037957  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:28.198348  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:28.519046  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:29.159256  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:30.439793  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:33.000375  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:38.120819  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:24:48.361451  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
E0729 12:25:08.842415  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-767488 -v=7 --alsologtostderr: (57.452403742s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-767488 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp testdata/cp-test.txt ha-767488:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488:/home/docker/cp-test.txt ha-767488-m02:/home/docker/cp-test_ha-767488_ha-767488-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test_ha-767488_ha-767488-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488:/home/docker/cp-test.txt ha-767488-m03:/home/docker/cp-test_ha-767488_ha-767488-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test_ha-767488_ha-767488-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488:/home/docker/cp-test.txt ha-767488-m04:/home/docker/cp-test_ha-767488_ha-767488-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test_ha-767488_ha-767488-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp testdata/cp-test.txt ha-767488-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m02:/home/docker/cp-test.txt ha-767488:/home/docker/cp-test_ha-767488-m02_ha-767488.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test_ha-767488-m02_ha-767488.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m02:/home/docker/cp-test.txt ha-767488-m03:/home/docker/cp-test_ha-767488-m02_ha-767488-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test_ha-767488-m02_ha-767488-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m02:/home/docker/cp-test.txt ha-767488-m04:/home/docker/cp-test_ha-767488-m02_ha-767488-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test_ha-767488-m02_ha-767488-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp testdata/cp-test.txt ha-767488-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt ha-767488:/home/docker/cp-test_ha-767488-m03_ha-767488.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test_ha-767488-m03_ha-767488.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt ha-767488-m02:/home/docker/cp-test_ha-767488-m03_ha-767488-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test_ha-767488-m03_ha-767488-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m03:/home/docker/cp-test.txt ha-767488-m04:/home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test_ha-767488-m03_ha-767488-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp testdata/cp-test.txt ha-767488-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2644279276/001/cp-test_ha-767488-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt ha-767488:/home/docker/cp-test_ha-767488-m04_ha-767488.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488 "sudo cat /home/docker/cp-test_ha-767488-m04_ha-767488.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt ha-767488-m02:/home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test_ha-767488-m04_ha-767488-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 cp ha-767488-m04:/home/docker/cp-test.txt ha-767488-m03:/home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test_ha-767488-m04_ha-767488-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.72s)
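CopyFile repeats one pattern for every node pair: copy a file in with minikube cp, then read it back over ssh. The core of a single iteration, in sketch form:
    minikube -p ha-767488 cp testdata/cp-test.txt ha-767488-m02:/home/docker/cp-test.txt
    minikube -p ha-767488 ssh -n ha-767488-m02 "sudo cat /home/docker/cp-test.txt"
    # cross-node variant: copy from one node to another, then cat it on the destination
    minikube -p ha-767488 cp ha-767488-m02:/home/docker/cp-test.txt ha-767488-m03:/home/docker/cp-test_ha-767488-m02_ha-767488-m03.txt
    minikube -p ha-767488 ssh -n ha-767488-m03 "sudo cat /home/docker/cp-test_ha-767488-m02_ha-767488-m03.txt"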

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (3.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 node stop m02 -v=7 --alsologtostderr: (3.276696533s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr: exit status 7 (629.257752ms)

                                                
                                                
-- stdout --
	ha-767488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767488-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767488-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767488-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:25:36.684712  256128 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:25:36.685006  256128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:25:36.685038  256128 out.go:304] Setting ErrFile to fd 2...
	I0729 12:25:36.685053  256128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:25:36.685239  256128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:25:36.685427  256128 out.go:298] Setting JSON to false
	I0729 12:25:36.685460  256128 mustload.go:65] Loading cluster: ha-767488
	I0729 12:25:36.685566  256128 notify.go:220] Checking for updates...
	I0729 12:25:36.685873  256128 config.go:182] Loaded profile config "ha-767488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:25:36.685895  256128 status.go:255] checking status of ha-767488 ...
	I0729 12:25:36.686317  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:36.686384  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:36.704161  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42871
	I0729 12:25:36.704685  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:36.705370  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:36.705391  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:36.705757  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:36.705948  256128 main.go:141] libmachine: (ha-767488) Calling .GetState
	I0729 12:25:36.707692  256128 status.go:330] ha-767488 host status = "Running" (err=<nil>)
	I0729 12:25:36.707713  256128 host.go:66] Checking if "ha-767488" exists ...
	I0729 12:25:36.708162  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:36.708216  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:36.723855  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39039
	I0729 12:25:36.724377  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:36.724861  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:36.724888  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:36.725225  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:36.725423  256128 main.go:141] libmachine: (ha-767488) Calling .GetIP
	I0729 12:25:36.728337  256128 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:25:36.728814  256128 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:25:36.728844  256128 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:25:36.728944  256128 host.go:66] Checking if "ha-767488" exists ...
	I0729 12:25:36.729289  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:36.729351  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:36.744807  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0729 12:25:36.745214  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:36.745677  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:36.745697  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:36.745972  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:36.746156  256128 main.go:141] libmachine: (ha-767488) Calling .DriverName
	I0729 12:25:36.746332  256128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:25:36.746354  256128 main.go:141] libmachine: (ha-767488) Calling .GetSSHHostname
	I0729 12:25:36.748971  256128 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:25:36.749386  256128 main.go:141] libmachine: (ha-767488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:5c:b8", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:20:37 +0000 UTC Type:0 Mac:52:54:00:f5:5c:b8 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-767488 Clientid:01:52:54:00:f5:5c:b8}
	I0729 12:25:36.749419  256128 main.go:141] libmachine: (ha-767488) DBG | domain ha-767488 has defined IP address 192.168.39.217 and MAC address 52:54:00:f5:5c:b8 in network mk-ha-767488
	I0729 12:25:36.749532  256128 main.go:141] libmachine: (ha-767488) Calling .GetSSHPort
	I0729 12:25:36.749733  256128 main.go:141] libmachine: (ha-767488) Calling .GetSSHKeyPath
	I0729 12:25:36.749874  256128 main.go:141] libmachine: (ha-767488) Calling .GetSSHUsername
	I0729 12:25:36.750008  256128 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488/id_rsa Username:docker}
	I0729 12:25:36.852933  256128 ssh_runner.go:195] Run: systemctl --version
	I0729 12:25:36.858903  256128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:25:36.874635  256128 kubeconfig.go:125] found "ha-767488" server: "https://192.168.39.254:8443"
	I0729 12:25:36.874667  256128 api_server.go:166] Checking apiserver status ...
	I0729 12:25:36.874697  256128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:25:36.889901  256128 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0729 12:25:36.899564  256128 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:25:36.899622  256128 ssh_runner.go:195] Run: ls
	I0729 12:25:36.904491  256128 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:25:36.908599  256128 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 12:25:36.908621  256128 status.go:422] ha-767488 apiserver status = Running (err=<nil>)
	I0729 12:25:36.908636  256128 status.go:257] ha-767488 status: &{Name:ha-767488 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:25:36.908660  256128 status.go:255] checking status of ha-767488-m02 ...
	I0729 12:25:36.909047  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:36.909094  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:36.924233  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I0729 12:25:36.924635  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:36.925112  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:36.925137  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:36.925475  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:36.925680  256128 main.go:141] libmachine: (ha-767488-m02) Calling .GetState
	I0729 12:25:36.927103  256128 status.go:330] ha-767488-m02 host status = "Stopped" (err=<nil>)
	I0729 12:25:36.927116  256128 status.go:343] host is not running, skipping remaining checks
	I0729 12:25:36.927122  256128 status.go:257] ha-767488-m02 status: &{Name:ha-767488-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:25:36.927136  256128 status.go:255] checking status of ha-767488-m03 ...
	I0729 12:25:36.927394  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:36.927455  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:36.943096  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I0729 12:25:36.943523  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:36.943942  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:36.943963  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:36.944284  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:36.944440  256128 main.go:141] libmachine: (ha-767488-m03) Calling .GetState
	I0729 12:25:36.945832  256128 status.go:330] ha-767488-m03 host status = "Running" (err=<nil>)
	I0729 12:25:36.945849  256128 host.go:66] Checking if "ha-767488-m03" exists ...
	I0729 12:25:36.946144  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:36.946176  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:36.960372  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0729 12:25:36.960832  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:36.961380  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:36.961436  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:36.961720  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:36.961938  256128 main.go:141] libmachine: (ha-767488-m03) Calling .GetIP
	I0729 12:25:36.964486  256128 main.go:141] libmachine: (ha-767488-m03) DBG | domain ha-767488-m03 has defined MAC address 52:54:00:05:1f:d0 in network mk-ha-767488
	I0729 12:25:36.964984  256128 main.go:141] libmachine: (ha-767488-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:1f:d0", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:23:09 +0000 UTC Type:0 Mac:52:54:00:05:1f:d0 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-767488-m03 Clientid:01:52:54:00:05:1f:d0}
	I0729 12:25:36.965010  256128 main.go:141] libmachine: (ha-767488-m03) DBG | domain ha-767488-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:05:1f:d0 in network mk-ha-767488
	I0729 12:25:36.965170  256128 host.go:66] Checking if "ha-767488-m03" exists ...
	I0729 12:25:36.965569  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:36.965612  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:36.981065  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43223
	I0729 12:25:36.981426  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:36.981909  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:36.981928  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:36.982227  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:36.982434  256128 main.go:141] libmachine: (ha-767488-m03) Calling .DriverName
	I0729 12:25:36.982634  256128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:25:36.982659  256128 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHHostname
	I0729 12:25:36.985310  256128 main.go:141] libmachine: (ha-767488-m03) DBG | domain ha-767488-m03 has defined MAC address 52:54:00:05:1f:d0 in network mk-ha-767488
	I0729 12:25:36.985682  256128 main.go:141] libmachine: (ha-767488-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:1f:d0", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:23:09 +0000 UTC Type:0 Mac:52:54:00:05:1f:d0 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-767488-m03 Clientid:01:52:54:00:05:1f:d0}
	I0729 12:25:36.985704  256128 main.go:141] libmachine: (ha-767488-m03) DBG | domain ha-767488-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:05:1f:d0 in network mk-ha-767488
	I0729 12:25:36.985804  256128 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHPort
	I0729 12:25:36.985959  256128 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHKeyPath
	I0729 12:25:36.986093  256128 main.go:141] libmachine: (ha-767488-m03) Calling .GetSSHUsername
	I0729 12:25:36.986254  256128 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488-m03/id_rsa Username:docker}
	I0729 12:25:37.068246  256128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:25:37.084299  256128 kubeconfig.go:125] found "ha-767488" server: "https://192.168.39.254:8443"
	I0729 12:25:37.084330  256128 api_server.go:166] Checking apiserver status ...
	I0729 12:25:37.084363  256128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:25:37.098792  256128 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1512/cgroup
	W0729 12:25:37.108705  256128 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1512/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:25:37.108752  256128 ssh_runner.go:195] Run: ls
	I0729 12:25:37.113263  256128 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 12:25:37.117634  256128 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 12:25:37.117661  256128 status.go:422] ha-767488-m03 apiserver status = Running (err=<nil>)
	I0729 12:25:37.117669  256128 status.go:257] ha-767488-m03 status: &{Name:ha-767488-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:25:37.117685  256128 status.go:255] checking status of ha-767488-m04 ...
	I0729 12:25:37.117951  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:37.117981  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:37.133233  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0729 12:25:37.133616  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:37.134049  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:37.134072  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:37.134407  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:37.134597  256128 main.go:141] libmachine: (ha-767488-m04) Calling .GetState
	I0729 12:25:37.136111  256128 status.go:330] ha-767488-m04 host status = "Running" (err=<nil>)
	I0729 12:25:37.136127  256128 host.go:66] Checking if "ha-767488-m04" exists ...
	I0729 12:25:37.136425  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:37.136460  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:37.151543  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38459
	I0729 12:25:37.151917  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:37.152418  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:37.152440  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:37.152746  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:37.152969  256128 main.go:141] libmachine: (ha-767488-m04) Calling .GetIP
	I0729 12:25:37.155644  256128 main.go:141] libmachine: (ha-767488-m04) DBG | domain ha-767488-m04 has defined MAC address 52:54:00:d8:66:33 in network mk-ha-767488
	I0729 12:25:37.156114  256128 main.go:141] libmachine: (ha-767488-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:66:33", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:24:37 +0000 UTC Type:0 Mac:52:54:00:d8:66:33 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:ha-767488-m04 Clientid:01:52:54:00:d8:66:33}
	I0729 12:25:37.156143  256128 main.go:141] libmachine: (ha-767488-m04) DBG | domain ha-767488-m04 has defined IP address 192.168.39.181 and MAC address 52:54:00:d8:66:33 in network mk-ha-767488
	I0729 12:25:37.156287  256128 host.go:66] Checking if "ha-767488-m04" exists ...
	I0729 12:25:37.156572  256128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:25:37.156615  256128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:25:37.171441  256128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0729 12:25:37.171862  256128 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:37.172305  256128 main.go:141] libmachine: Using API Version  1
	I0729 12:25:37.172327  256128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:37.172621  256128 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:37.172811  256128 main.go:141] libmachine: (ha-767488-m04) Calling .DriverName
	I0729 12:25:37.172995  256128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:25:37.173014  256128 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHHostname
	I0729 12:25:37.175657  256128 main.go:141] libmachine: (ha-767488-m04) DBG | domain ha-767488-m04 has defined MAC address 52:54:00:d8:66:33 in network mk-ha-767488
	I0729 12:25:37.176047  256128 main.go:141] libmachine: (ha-767488-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:66:33", ip: ""} in network mk-ha-767488: {Iface:virbr1 ExpiryTime:2024-07-29 13:24:37 +0000 UTC Type:0 Mac:52:54:00:d8:66:33 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:ha-767488-m04 Clientid:01:52:54:00:d8:66:33}
	I0729 12:25:37.176073  256128 main.go:141] libmachine: (ha-767488-m04) DBG | domain ha-767488-m04 has defined IP address 192.168.39.181 and MAC address 52:54:00:d8:66:33 in network mk-ha-767488
	I0729 12:25:37.176239  256128 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHPort
	I0729 12:25:37.176419  256128 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHKeyPath
	I0729 12:25:37.176578  256128 main.go:141] libmachine: (ha-767488-m04) Calling .GetSSHUsername
	I0729 12:25:37.176702  256128 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/ha-767488-m04/id_rsa Username:docker}
	I0729 12:25:37.256103  256128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:25:37.270941  256128 status.go:257] ha-767488-m04 status: &{Name:ha-767488-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (3.91s)
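StopSecondaryNode stops the m02 control plane and expects status to report a degraded cluster. In sketch form:
    minikube -p ha-767488 node stop m02 -v=7 --alsologtostderr
    minikube -p ha-767488 status -v=7 --alsologtostderr   # exits with status 7 while any node is stopped, as in the output above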

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (49.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 node start m02 -v=7 --alsologtostderr
E0729 12:25:49.803493  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-767488 node start m02 -v=7 --alsologtostderr: (48.114744325s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-767488 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.05s)
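RestartSecondaryNode is the inverse step; a sketch of the recovery check:
    minikube -p ha-767488 node start m02 -v=7 --alsologtostderr
    minikube -p ha-767488 status -v=7 --alsologtostderr
    kubectl get nodes   # all four nodes should report Ready again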

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

                                                
                                    
x
+
TestJSONOutput/start/Command (67.92s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-655289 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0729 12:50:21.362146  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-655289 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.92328225s)
--- PASS: TestJSONOutput/start/Command (67.92s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-655289 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-655289 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-655289 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-655289 --output=json --user=testUser: (7.376654008s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-083491 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-083491 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.191775ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fe01bce8-61e0-437b-8ce1-0b7a46f9c794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-083491] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4445e5cb-0bd2-4a39-9c31-5e1c2eb70c50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19341"}}
	{"specversion":"1.0","id":"0ce86e8e-1888-4db2-b599-0198c966db95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0f99666-e68a-470e-ae8a-6e58dbf9e1e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig"}}
	{"specversion":"1.0","id":"3dc3c02f-3a38-4e47-93d3-38ccfa598f92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube"}}
	{"specversion":"1.0","id":"9140c767-7e94-4d69-ad6d-b46faf4db9bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cc06021e-8c72-491a-b2be-f1784bc85534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7dc6ada2-77c0-4d56-b532-5289fd46e3de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-083491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-083491
--- PASS: TestErrorJSONOutput (0.19s)
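
Note: with --output=json, each line of stdout above is a self-contained CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data), and failures such as DRV_UNSUPPORTED_OS arrive as io.k8s.sigs.minikube.error events carrying an exitcode and message. A minimal Go sketch for consuming such a stream follows; the field names are copied from this run's output, and the program name is only a placeholder.

	// parse_events.go - a minimal sketch for decoding the line-delimited
	// CloudEvents-style JSON printed by `minikube ... --output=json` above.
	// Field names mirror the output captured in this run; treat them as
	// assumptions, not a stable schema.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe the minikube JSON output into this program
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			// Errors such as DRV_UNSUPPORTED_OS arrive as *.error events.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			} else {
				fmt.Println(ev.Data["message"])
			}
		}
	}

A typical use would be piping a start attempt through it, e.g. out/minikube-linux-amd64 start -p demo --output=json | go run parse_events.go (the profile name "demo" is hypothetical).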

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (89.62s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-485742 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-485742 --driver=kvm2  --container-runtime=crio: (43.127547686s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-489022 --driver=kvm2  --container-runtime=crio
E0729 12:52:18.313796  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-489022 --driver=kvm2  --container-runtime=crio: (44.061382745s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-485742
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-489022
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-489022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-489022
helpers_test.go:175: Cleaning up "first-485742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-485742
--- PASS: TestMinikubeProfile (89.62s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (25.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-309951 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-309951 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.413210197s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-309951 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-309951 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
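
Note: the two commands above verify the host-directory share by listing /minikube-host inside the guest and then grepping the guest's mount table for a 9p entry. The Go sketch below performs the same 9p check by shelling out to the minikube binary; the binary path and profile name are taken from this run and are placeholders only.

	// verify_mount.go - a sketch of the 9p-mount check performed above:
	// run `minikube -p <profile> ssh -- mount` and look for a "9p" entry.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func has9pMount(minikube, profile string) (bool, error) {
		out, err := exec.Command(minikube, "-p", profile, "ssh", "--", "mount").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("minikube ssh: %v\n%s", err, out)
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "9p") {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := has9pMount("out/minikube-linux-amd64", "mount-start-1-309951")
		if err != nil {
			panic(err)
		}
		fmt.Println("9p mount present:", ok)
	}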

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (30.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-329356 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-329356 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.670271809s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-329356 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-329356 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-309951 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-329356 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-329356 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-329356
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-329356: (1.27708383s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (25.09s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-329356
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-329356: (24.085173321s)
--- PASS: TestMountStart/serial/RestartStopped (25.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-329356 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-329356 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (124.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786745 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 12:54:27.880700  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786745 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m4.016935781s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-786745 -- rollout status deployment/busybox: (5.326454925s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-cmdrr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-tkss8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-cmdrr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-tkss8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-cmdrr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-tkss8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.91s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-cmdrr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-cmdrr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-tkss8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-786745 -- exec busybox-fc5497c4f-tkss8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
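
Note: inside each busybox pod the test resolves host.minikube.internal and then pings the resulting address (192.168.39.1 in this run). The shell pipeline nslookup ... | awk 'NR==5' | cut -d' ' -f3 simply takes the fifth line of nslookup's output and its third space-separated field. The sketch below reproduces that extraction in Go against an illustrative, hand-written nslookup transcript; the line and field offsets are assumptions inherited from the pipeline, not a contract of nslookup's output format.

	// extract_host_ip.go - a sketch of the parsing done by the pipeline
	// `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` above:
	// take line 5 of the nslookup output and its third space-separated field.
	package main

	import (
		"fmt"
		"strings"
	)

	func hostIPFromNslookup(output string) (string, bool) {
		lines := strings.Split(output, "\n")
		if len(lines) < 5 {
			return "", false
		}
		fields := strings.Split(lines[4], " ") // NR==5 -> index 4
		if len(fields) < 3 {
			return "", false
		}
		return fields[2], true // cut -d' ' -f3 -> index 2
	}

	func main() {
		// Hand-written sample resembling busybox nslookup output; illustrative only.
		sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1\n"
		ip, ok := hostIPFromNslookup(sample)
		fmt.Println(ip, ok) // 192.168.39.1 true
	}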

                                                
                                    
x
+
TestMultiNode/serial/AddNode (52.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-786745 -v 3 --alsologtostderr
E0729 12:57:18.313642  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-786745 -v 3 --alsologtostderr: (51.525373007s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.08s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-786745 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp testdata/cp-test.txt multinode-786745:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1996079696/001/cp-test_multinode-786745.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745:/home/docker/cp-test.txt multinode-786745-m02:/home/docker/cp-test_multinode-786745_multinode-786745-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m02 "sudo cat /home/docker/cp-test_multinode-786745_multinode-786745-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745:/home/docker/cp-test.txt multinode-786745-m03:/home/docker/cp-test_multinode-786745_multinode-786745-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m03 "sudo cat /home/docker/cp-test_multinode-786745_multinode-786745-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp testdata/cp-test.txt multinode-786745-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1996079696/001/cp-test_multinode-786745-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m02 "sudo cat /home/docker/cp-test.txt"
E0729 12:57:30.927449  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt multinode-786745:/home/docker/cp-test_multinode-786745-m02_multinode-786745.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745 "sudo cat /home/docker/cp-test_multinode-786745-m02_multinode-786745.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745-m02:/home/docker/cp-test.txt multinode-786745-m03:/home/docker/cp-test_multinode-786745-m02_multinode-786745-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m03 "sudo cat /home/docker/cp-test_multinode-786745-m02_multinode-786745-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp testdata/cp-test.txt multinode-786745-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1996079696/001/cp-test_multinode-786745-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt multinode-786745:/home/docker/cp-test_multinode-786745-m03_multinode-786745.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745 "sudo cat /home/docker/cp-test_multinode-786745-m03_multinode-786745.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 cp multinode-786745-m03:/home/docker/cp-test.txt multinode-786745-m02:/home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 ssh -n multinode-786745-m02 "sudo cat /home/docker/cp-test_multinode-786745-m03_multinode-786745-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)
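
Note: the copy checks above follow one pattern per node pair: minikube cp a local file to <node>:/home/docker/cp-test.txt, then minikube ssh -n <node> "sudo cat ..." and compare the result with the source file. A compact sketch of that round trip is below; the binary path, profile, node, and file names are placeholders copied from this run.

	// cp_roundtrip.go - a sketch of the copy-and-verify pattern above:
	// `minikube cp` a local file onto a node, then `minikube ssh` a
	// `sudo cat` of it and compare the bytes with the original.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func cpAndVerify(minikube, profile, node, local, remote string) error {
		if out, err := exec.Command(minikube, "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
			return fmt.Errorf("cp: %v\n%s", err, out)
		}
		got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+remote).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh cat: %v\n%s", err, got)
		}
		want, err := os.ReadFile(local)
		if err != nil {
			return err
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			return fmt.Errorf("content mismatch on %s:%s", node, remote)
		}
		return nil
	}

	func main() {
		err := cpAndVerify("out/minikube-linux-amd64", "multinode-786745",
			"multinode-786745-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
		fmt.Println(err)
	}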

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-786745 node stop m03: (1.514533376s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786745 status: exit status 7 (413.834445ms)

                                                
                                                
-- stdout --
	multinode-786745
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-786745-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-786745-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-786745 status --alsologtostderr: exit status 7 (411.17738ms)

                                                
                                                
-- stdout --
	multinode-786745
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-786745-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-786745-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:57:36.659605  270027 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:57:36.660169  270027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:57:36.660188  270027 out.go:304] Setting ErrFile to fd 2...
	I0729 12:57:36.660196  270027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:57:36.660687  270027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 12:57:36.661167  270027 out.go:298] Setting JSON to false
	I0729 12:57:36.661196  270027 mustload.go:65] Loading cluster: multinode-786745
	I0729 12:57:36.661234  270027 notify.go:220] Checking for updates...
	I0729 12:57:36.661546  270027 config.go:182] Loaded profile config "multinode-786745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:57:36.661561  270027 status.go:255] checking status of multinode-786745 ...
	I0729 12:57:36.661977  270027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:57:36.662023  270027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:57:36.678152  270027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0729 12:57:36.678544  270027 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:57:36.679123  270027 main.go:141] libmachine: Using API Version  1
	I0729 12:57:36.679151  270027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:57:36.679520  270027 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:57:36.679739  270027 main.go:141] libmachine: (multinode-786745) Calling .GetState
	I0729 12:57:36.681345  270027 status.go:330] multinode-786745 host status = "Running" (err=<nil>)
	I0729 12:57:36.681366  270027 host.go:66] Checking if "multinode-786745" exists ...
	I0729 12:57:36.681648  270027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:57:36.681688  270027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:57:36.696896  270027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35183
	I0729 12:57:36.697327  270027 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:57:36.697780  270027 main.go:141] libmachine: Using API Version  1
	I0729 12:57:36.697799  270027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:57:36.698198  270027 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:57:36.698407  270027 main.go:141] libmachine: (multinode-786745) Calling .GetIP
	I0729 12:57:36.701257  270027 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 12:57:36.701678  270027 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 12:57:36.701711  270027 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 12:57:36.701799  270027 host.go:66] Checking if "multinode-786745" exists ...
	I0729 12:57:36.702089  270027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:57:36.702144  270027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:57:36.718780  270027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0729 12:57:36.719182  270027 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:57:36.719678  270027 main.go:141] libmachine: Using API Version  1
	I0729 12:57:36.719700  270027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:57:36.720084  270027 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:57:36.720269  270027 main.go:141] libmachine: (multinode-786745) Calling .DriverName
	I0729 12:57:36.720448  270027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:57:36.720473  270027 main.go:141] libmachine: (multinode-786745) Calling .GetSSHHostname
	I0729 12:57:36.723061  270027 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 12:57:36.723459  270027 main.go:141] libmachine: (multinode-786745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:e7:93", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:37 +0000 UTC Type:0 Mac:52:54:00:0e:e7:93 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-786745 Clientid:01:52:54:00:0e:e7:93}
	I0729 12:57:36.723484  270027 main.go:141] libmachine: (multinode-786745) DBG | domain multinode-786745 has defined IP address 192.168.39.10 and MAC address 52:54:00:0e:e7:93 in network mk-multinode-786745
	I0729 12:57:36.723634  270027 main.go:141] libmachine: (multinode-786745) Calling .GetSSHPort
	I0729 12:57:36.723812  270027 main.go:141] libmachine: (multinode-786745) Calling .GetSSHKeyPath
	I0729 12:57:36.723948  270027 main.go:141] libmachine: (multinode-786745) Calling .GetSSHUsername
	I0729 12:57:36.724094  270027 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745/id_rsa Username:docker}
	I0729 12:57:36.800385  270027 ssh_runner.go:195] Run: systemctl --version
	I0729 12:57:36.806473  270027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:57:36.821113  270027 kubeconfig.go:125] found "multinode-786745" server: "https://192.168.39.10:8443"
	I0729 12:57:36.821142  270027 api_server.go:166] Checking apiserver status ...
	I0729 12:57:36.821174  270027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:57:36.834340  270027 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0729 12:57:36.843807  270027 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:57:36.843856  270027 ssh_runner.go:195] Run: ls
	I0729 12:57:36.848150  270027 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0729 12:57:36.852104  270027 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0729 12:57:36.852127  270027 status.go:422] multinode-786745 apiserver status = Running (err=<nil>)
	I0729 12:57:36.852141  270027 status.go:257] multinode-786745 status: &{Name:multinode-786745 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:57:36.852166  270027 status.go:255] checking status of multinode-786745-m02 ...
	I0729 12:57:36.852536  270027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:57:36.852588  270027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:57:36.868369  270027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0729 12:57:36.868871  270027 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:57:36.869359  270027 main.go:141] libmachine: Using API Version  1
	I0729 12:57:36.869384  270027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:57:36.869700  270027 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:57:36.869894  270027 main.go:141] libmachine: (multinode-786745-m02) Calling .GetState
	I0729 12:57:36.871364  270027 status.go:330] multinode-786745-m02 host status = "Running" (err=<nil>)
	I0729 12:57:36.871390  270027 host.go:66] Checking if "multinode-786745-m02" exists ...
	I0729 12:57:36.871732  270027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:57:36.871773  270027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:57:36.886970  270027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0729 12:57:36.887318  270027 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:57:36.887764  270027 main.go:141] libmachine: Using API Version  1
	I0729 12:57:36.887787  270027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:57:36.888092  270027 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:57:36.888255  270027 main.go:141] libmachine: (multinode-786745-m02) Calling .GetIP
	I0729 12:57:36.890698  270027 main.go:141] libmachine: (multinode-786745-m02) DBG | domain multinode-786745-m02 has defined MAC address 52:54:00:33:cf:b5 in network mk-multinode-786745
	I0729 12:57:36.891095  270027 main.go:141] libmachine: (multinode-786745-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:cf:b5", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:55:49 +0000 UTC Type:0 Mac:52:54:00:33:cf:b5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-786745-m02 Clientid:01:52:54:00:33:cf:b5}
	I0729 12:57:36.891118  270027 main.go:141] libmachine: (multinode-786745-m02) DBG | domain multinode-786745-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:33:cf:b5 in network mk-multinode-786745
	I0729 12:57:36.891235  270027 host.go:66] Checking if "multinode-786745-m02" exists ...
	I0729 12:57:36.891514  270027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:57:36.891544  270027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:57:36.905963  270027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I0729 12:57:36.906382  270027 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:57:36.906800  270027 main.go:141] libmachine: Using API Version  1
	I0729 12:57:36.906822  270027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:57:36.907130  270027 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:57:36.907325  270027 main.go:141] libmachine: (multinode-786745-m02) Calling .DriverName
	I0729 12:57:36.907501  270027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:57:36.907517  270027 main.go:141] libmachine: (multinode-786745-m02) Calling .GetSSHHostname
	I0729 12:57:36.909841  270027 main.go:141] libmachine: (multinode-786745-m02) DBG | domain multinode-786745-m02 has defined MAC address 52:54:00:33:cf:b5 in network mk-multinode-786745
	I0729 12:57:36.910278  270027 main.go:141] libmachine: (multinode-786745-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:cf:b5", ip: ""} in network mk-multinode-786745: {Iface:virbr1 ExpiryTime:2024-07-29 13:55:49 +0000 UTC Type:0 Mac:52:54:00:33:cf:b5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-786745-m02 Clientid:01:52:54:00:33:cf:b5}
	I0729 12:57:36.910303  270027 main.go:141] libmachine: (multinode-786745-m02) DBG | domain multinode-786745-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:33:cf:b5 in network mk-multinode-786745
	I0729 12:57:36.910443  270027 main.go:141] libmachine: (multinode-786745-m02) Calling .GetSSHPort
	I0729 12:57:36.910599  270027 main.go:141] libmachine: (multinode-786745-m02) Calling .GetSSHKeyPath
	I0729 12:57:36.910762  270027 main.go:141] libmachine: (multinode-786745-m02) Calling .GetSSHUsername
	I0729 12:57:36.910888  270027 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19341-233093/.minikube/machines/multinode-786745-m02/id_rsa Username:docker}
	I0729 12:57:36.995746  270027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:57:37.009343  270027 status.go:257] multinode-786745-m02 status: &{Name:multinode-786745-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:57:37.009375  270027 status.go:255] checking status of multinode-786745-m03 ...
	I0729 12:57:37.009719  270027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:57:37.009763  270027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:57:37.025576  270027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0729 12:57:37.026010  270027 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:57:37.026451  270027 main.go:141] libmachine: Using API Version  1
	I0729 12:57:37.026475  270027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:57:37.026866  270027 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:57:37.027042  270027 main.go:141] libmachine: (multinode-786745-m03) Calling .GetState
	I0729 12:57:37.028514  270027 status.go:330] multinode-786745-m03 host status = "Stopped" (err=<nil>)
	I0729 12:57:37.028529  270027 status.go:343] host is not running, skipping remaining checks
	I0729 12:57:37.028535  270027 status.go:257] multinode-786745-m03 status: &{Name:multinode-786745-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
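
Note: the two status invocations above exit with status 7 rather than 0 because one host (multinode-786745-m03) is Stopped; the test accepts that since it stopped the node itself. The sketch below shows one way to run the status command from Go and read such non-zero exit codes via os/exec; the binary path and profile name are placeholders from this run.

	// status_exitcode.go - a sketch that runs `minikube status` for a profile
	// and reports the process exit code; in the run above, exit status 7
	// accompanied a stopped node while the remaining hosts kept running.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func statusExitCode(minikube, profile string) (int, string, error) {
		cmd := exec.Command(minikube, "-p", profile, "status")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return 0, string(out), nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode(), string(out), nil // non-zero exit, output is still useful
		}
		return -1, "", err // the command did not run at all
	}

	func main() {
		code, out, err := statusExitCode("out/minikube-linux-amd64", "multinode-786745")
		if err != nil {
			panic(err)
		}
		fmt.Printf("exit %d\n%s", code, out)
	}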

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-786745 node start m03 -v=7 --alsologtostderr: (39.5468507s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.17s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-786745 node delete m03: (1.933410679s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.45s)
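
Note: the final kubectl call above renders the node list through a go-template that prints the status of each node's Ready condition, one per line. The sketch below executes that same template (copied verbatim) with Go's text/template against a small hand-written node list, to show what the test ends up asserting on; the sample JSON is illustrative only.

	// ready_template.go - a sketch running the go-template used above
	// (`kubectl get nodes -o go-template=...`) against a tiny hand-written
	// node list, to show that it prints each node's Ready condition status.
	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// A minimal, made-up stand-in for the API server's node list.
	const nodeList = `{
	  "items": [
	    {"status": {"conditions": [{"type": "MemoryPressure", "status": "False"},
	                               {"type": "Ready", "status": "True"}]}},
	    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
	  ]
	}`

	func main() {
		var data interface{}
		if err := json.Unmarshal([]byte(nodeList), &data); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		// Prints " True" once per node that carries a Ready condition.
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}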

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (177.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786745 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 13:07:01.365363  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
E0729 13:07:18.313924  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786745 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m56.862640803s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-786745 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (177.39s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-786745
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786745-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-786745-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.644603ms)

                                                
                                                
-- stdout --
	* [multinode-786745-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-786745-m02' is duplicated with machine name 'multinode-786745-m02' in profile 'multinode-786745'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-786745-m03 --driver=kvm2  --container-runtime=crio
E0729 13:09:27.881103  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-786745-m03 --driver=kvm2  --container-runtime=crio: (41.54352295s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-786745
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-786745: exit status 80 (210.675533ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-786745 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-786745-m03 already exists in multinode-786745-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-786745-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.65s)

                                                
                                    
x
+
TestScheduledStopUnix (114.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-553504 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-553504 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.548687785s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-553504 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-553504 -n scheduled-stop-553504
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-553504 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-553504 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-553504 -n scheduled-stop-553504
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-553504
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-553504 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-553504
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-553504: exit status 7 (65.268092ms)

                                                
                                                
-- stdout --
	scheduled-stop-553504
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-553504 -n scheduled-stop-553504
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-553504 -n scheduled-stop-553504: exit status 7 (64.799366ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-553504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-553504
--- PASS: TestScheduledStopUnix (114.12s)
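
Note: the sequence above schedules a stop, cancels it, re-schedules it for 15s, and then keeps querying status --format={{.Host}} until the profile reports Stopped, tolerating the non-zero exit codes that appear once the VM is down. A rough sketch of such a wait loop is below; it is not the test's own helper, and the binary path, profile name, and timeout are placeholders from this run.

	// wait_stopped.go - a sketch of the wait performed at the end of the test
	// above: poll `minikube status --format={{.Host}}` until it prints "Stopped".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForStopped(minikube, profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// A non-zero exit is expected once the VM is down, so the error is
			// ignored and only the printed host state is inspected.
			out, _ := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
			if strings.TrimSpace(string(out)) == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("profile %q did not reach Stopped within %s", profile, timeout)
	}

	func main() {
		if err := waitForStopped("out/minikube-linux-amd64", "scheduled-stop-553504", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("host is Stopped")
	}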

                                                
                                    
x
+
TestRunningBinaryUpgrade (218.65s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1554736100 start -p running-upgrade-614412 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0729 13:17:18.313586  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1554736100 start -p running-upgrade-614412 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m56.151368534s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-614412 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0729 13:19:27.880514  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-614412 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m36.78914408s)
helpers_test.go:175: Cleaning up "running-upgrade-614412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-614412
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-614412: (1.205201846s)
--- PASS: TestRunningBinaryUpgrade (218.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-225538 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-225538 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.233874ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-225538] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-225538 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-225538 --driver=kvm2  --container-runtime=crio: (1m34.739755305s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-225538 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-507612 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-507612 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.264857ms)

                                                
                                                
-- stdout --
	* [false-507612] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:17:01.720682  278138 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:17:01.721775  278138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:17:01.721790  278138 out.go:304] Setting ErrFile to fd 2...
	I0729 13:17:01.721794  278138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:17:01.722152  278138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19341-233093/.minikube/bin
	I0729 13:17:01.722835  278138 out.go:298] Setting JSON to false
	I0729 13:17:01.723791  278138 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10765,"bootTime":1722248257,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:17:01.723878  278138 start.go:139] virtualization: kvm guest
	I0729 13:17:01.726077  278138 out.go:177] * [false-507612] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:17:01.727407  278138 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 13:17:01.727430  278138 notify.go:220] Checking for updates...
	I0729 13:17:01.729592  278138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:17:01.730933  278138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19341-233093/kubeconfig
	I0729 13:17:01.732452  278138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19341-233093/.minikube
	I0729 13:17:01.733805  278138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:17:01.735147  278138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:17:01.736829  278138 config.go:182] Loaded profile config "NoKubernetes-225538": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:17:01.736960  278138 config.go:182] Loaded profile config "force-systemd-env-265470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:17:01.737045  278138 config.go:182] Loaded profile config "offline-crio-201075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:17:01.737122  278138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:17:01.774082  278138 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:17:01.775555  278138 start.go:297] selected driver: kvm2
	I0729 13:17:01.775577  278138 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:17:01.775603  278138 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:17:01.777822  278138 out.go:177] 
	W0729 13:17:01.779163  278138 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0729 13:17:01.780577  278138 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-507612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-507612" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-507612

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507612"

                                                
                                                
----------------------- debugLogs end: false-507612 [took: 2.64680029s] --------------------------------
helpers_test.go:175: Cleaning up "false-507612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-507612
--- PASS: TestNetworkPlugins/group/false (2.90s)
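For reference, this test deliberately starts the crio runtime with --cni=false and asserts the MK_USAGE exit ("crio" requires CNI); because the cluster never comes up, the "context was not found" / "Profile not found" lines in the debugLogs dump above are expected. A minimal sketch of the rejected invocation versus an accepted one, reusing the --cni=bridge variant exercised later in this report:

    # rejected with exit status 14 (MK_USAGE): crio needs a CNI, so --cni=false is refused
    out/minikube-linux-amd64 start -p false-507612 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio

    # accepted: any CNI satisfies the check; bridge is one of the plugins this suite runs
    out/minikube-linux-amd64 start -p bridge-507612 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio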

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (160.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2966738321 start -p stopped-upgrade-938122 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2966738321 start -p stopped-upgrade-938122 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m46.36961888s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2966738321 -p stopped-upgrade-938122 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2966738321 -p stopped-upgrade-938122 stop: (1.461145669s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-938122 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-938122 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.547297966s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (160.38s)
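For reference, the upgrade flow above is three steps against the same profile: create the cluster with a previously released minikube binary, stop it, then restart the stopped profile with the binary under test; the restart completing is the upgrade assertion. A condensed sketch (the /tmp path to the old release binary is specific to this run):

    # 1) provision with the old release binary
    /tmp/minikube-v1.26.0.2966738321 start -p stopped-upgrade-938122 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # 2) stop the cluster with the same old binary
    /tmp/minikube-v1.26.0.2966738321 -p stopped-upgrade-938122 stop
    # 3) restart the stopped profile with the binary under test
    out/minikube-linux-amd64 start -p stopped-upgrade-938122 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio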

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (66.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-225538 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-225538 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.943273698s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-225538 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-225538 status -o json: exit status 2 (247.588301ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-225538","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-225538
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-225538: (1.051816979s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.24s)
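For reference, the non-zero status exit above is the expected state: restarting the existing profile with --no-kubernetes leaves the host Running while Kubelet and APIServer report Stopped, and `status -o json` exits 2 in that state. A small sketch of reading those fields; jq is an assumption here and is not used by the test itself:

    # exits 2 while the Kubernetes components are stopped, but still prints the JSON status
    out/minikube-linux-amd64 -p NoKubernetes-225538 status -o json || true
    # assuming jq is installed (not part of the test), pull the fields the test inspects
    out/minikube-linux-amd64 -p NoKubernetes-225538 status -o json | jq -r '.Host, .Kubelet, .APIServer'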

                                                
                                    
x
+
TestNoKubernetes/serial/Start (30.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-225538 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-225538 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.815622621s)
--- PASS: TestNoKubernetes/serial/Start (30.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-225538 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-225538 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.094822ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
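For reference, the kubelet check here is an SSH'd `systemctl is-active --quiet service kubelet`: exit 0 would mean kubelet is running, and the status-3 exit surfaced above (reported by minikube ssh as exit status 1) is the "not active" case this test requires in --no-kubernetes mode. A minimal sketch of the same probe:

    # 0 = kubelet active; a non-zero code (commonly 3, inactive) is what this test expects
    out/minikube-linux-amd64 ssh -p NoKubernetes-225538 "sudo systemctl is-active --quiet service kubelet"
    echo "kubelet active check exit: $?"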

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (28.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.501000635s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.835558552s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-225538
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-225538: (1.322363051s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-225538 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-225538 --driver=kvm2  --container-runtime=crio: (21.815179129s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.82s)

                                                
                                    
x
+
TestPause/serial/Start (74.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-220574 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-220574 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m14.445291919s)
--- PASS: TestPause/serial/Start (74.45s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-938122
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-225538 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-225538 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.494159ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (100.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m40.140345542s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (118.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0729 13:23:41.366074  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m58.248380281s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (118.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-507612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-507612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2lfs4" [738185bd-0d12-4544-9c9c-6bda4290a469] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2lfs4" [738185bd-0d12-4544-9c9c-6bda4290a469] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004436732s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-507612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
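For reference, each network plugin variant gets the same three probes against the netcat deployment once it is Running: DNS resolution of kubernetes.default, a localhost port check, and a hairpin check back through the netcat service name. The auto variant above, condensed:

    # DNS: resolve the in-cluster API service name from inside the pod
    kubectl --context auto-507612 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod reaches its own listener on 8080
    kubectl --context auto-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod reaches itself back through the "netcat" service
    kubectl --context auto-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"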

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (90.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m30.501167005s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (102.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m42.591265137s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (102.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7465z" [6248d596-70c1-489a-b757-c447f9662be9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004292071s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-507612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-507612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wqj9r" [a7522df7-211d-4d5b-ab88-b288f35d604e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-wqj9r" [a7522df7-211d-4d5b-ab88-b288f35d604e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004514848s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-507612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (95.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m35.121740094s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (95.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (115.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m55.177274051s)
--- PASS: TestNetworkPlugins/group/flannel/Start (115.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ch2fq" [aceecfec-3fce-4baa-a388-00f5ac99eaf9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006033268s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-507612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-507612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8lvph" [105176bc-401a-4c48-8027-20d1c4ae051a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8lvph" [105176bc-401a-4c48-8027-20d1c4ae051a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004102608s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-507612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-507612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-507612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lpb9k" [8f4b2911-d166-46d6-910c-5d85d42d0614] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0729 13:27:18.312976  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/addons-631322/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-lpb9k" [8f4b2911-d166-46d6-910c-5d85d42d0614] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.006289904s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-507612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-507612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bw7pl" [bd7be001-7c3e-4feb-8791-cc8630f4cae2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bw7pl" [bd7be001-7c3e-4feb-8791-cc8630f4cae2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005145688s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-507612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (74.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-507612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m14.572381702s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-507612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (119.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-566777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-566777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m59.077080162s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (119.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9wgtn" [4764ccfc-01c9-432a-b270-bf21270f1552] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005426542s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-507612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-507612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bwgvq" [a74e3774-db33-4180-8ea5-cb17e6996c47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bwgvq" [a74e3774-db33-4180-8ea5-cb17e6996c47] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004053158s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-507612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
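The three flannel probes above (DNS, Localhost, HairPin) differ only in the command passed to kubectl exec against the netcat deployment. A minimal Go sketch of that pattern, assuming kubectl is on PATH; the runProbe helper is illustrative and not part of the minikube test code:

package main

import (
	"fmt"
	"os/exec"
)

// runProbe is a hypothetical helper: it execs a command inside the
// netcat deployment of the given kubectl context and reports whether
// the probe exited cleanly, mirroring the DNS/Localhost/HairPin checks
// in the log above.
func runProbe(context string, probe ...string) error {
	args := append([]string{"--context", context, "exec", "deployment/netcat", "--"}, probe...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("probe %v failed: %v\n%s", probe, err, out)
	}
	return nil
}

func main() {
	ctx := "flannel-507612"
	// DNS: the in-cluster service name must resolve.
	_ = runProbe(ctx, "nslookup", "kubernetes.default")
	// Localhost: port 8080 reachable on the pod's own loopback.
	_ = runProbe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod can reach itself through its own service name.
	if err := runProbe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"); err != nil {
		fmt.Println(err)
	}
}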

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (72.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-135920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-135920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m12.393730004s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-507612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-507612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tf4rh" [97047dad-b2fb-4c5d-8eb2-3ef926b6e6e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tf4rh" [97047dad-b2fb-4c5d-8eb2-3ef926b6e6e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00480585s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-507612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-507612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0729 13:58:05.412088  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-972693 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 13:29:27.881339  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-972693 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m14.050129563s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.05s)
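Each FirstStart run above uses the same minikube invocation and differs only in the flag that names the scenario: --preload=false for no-preload, --embed-certs for embed-certs, and --apiserver-port=8444 for default-k8s-diff-port. A hedged Go shell-out sketch of that shared invocation, using the binary path shown in the log; the variantFlag map is illustrative, not part of the test suite:

package main

import (
	"os"
	"os/exec"
)

// Illustrative map of profile name to the flag that distinguishes each
// FirstStart run in the log; the rest of the invocation is identical.
var variantFlag = map[string]string{
	"no-preload-566777":            "--preload=false",
	"embed-certs-135920":           "--embed-certs",
	"default-k8s-diff-port-972693": "--apiserver-port=8444",
}

func start(profile, k8sVersion string) error {
	args := []string{
		"start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "--wait=true",
		variantFlag[profile],
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=" + k8sVersion,
	}
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = start("embed-certs-135920", "v1.30.3")
}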

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-135920 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9da5631b-2e6f-49af-a4d1-47b2bc69778b] Pending
helpers_test.go:344: "busybox" [9da5631b-2e6f-49af-a4d1-47b2bc69778b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9da5631b-2e6f-49af-a4d1-47b2bc69778b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003580917s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-135920 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-566777 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [913d9c33-01b3-4966-bbfb-61a75f958c12] Pending
helpers_test.go:344: "busybox" [913d9c33-01b3-4966-bbfb-61a75f958c12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [913d9c33-01b3-4966-bbfb-61a75f958c12] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005664569s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-566777 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)
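The DeployApp steps above follow one pattern: create the busybox pod from testdata, wait up to 8m0s for the "integration-test=busybox" pod to become Running, then read the open-file limit inside it. A minimal Go sketch of that flow, assuming kubectl is on PATH; the deployBusybox helper and the 2-second polling interval are choices made for this sketch, not taken from the test code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// deployBusybox mirrors the DeployApp steps above: create the busybox pod,
// poll until its phase is Running, then run "ulimit -n" inside it.
// The 8-minute budget matches the waits shown in the log.
func deployBusybox(context string) error {
	if out, err := exec.Command("kubectl", "--context", context,
		"create", "-f", "testdata/busybox.yaml").CombinedOutput(); err != nil {
		return fmt.Errorf("create failed: %v\n%s", err, out)
	}

	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", context, "get", "pods",
			"-l", "integration-test=busybox", "-o", "jsonpath={.items[0].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			ulimit, err := exec.Command("kubectl", "--context", context,
				"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
			fmt.Printf("ulimit -n: %s", ulimit)
			return err
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("busybox pod not Running within 8m")
}

func main() {
	_ = deployBusybox("no-preload-566777")
}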

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-135920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-135920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.01854914s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-135920 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-566777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 13:30:06.009500  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
E0729 13:30:06.014785  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/auto-507612/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-566777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002872981s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-566777 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7baf4003-8228-4f6d-98e6-f17703c2453c] Pending
helpers_test.go:344: "busybox" [7baf4003-8228-4f6d-98e6-f17703c2453c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7baf4003-8228-4f6d-98e6-f17703c2453c] Running
E0729 13:30:37.633409  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:37.638672  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:37.648940  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:37.669264  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:37.709573  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:37.789920  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:37.950432  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:38.271011  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:30:38.911221  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.003457789s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-972693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-972693 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)
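The EnableAddonWhileActive entries above all enable the metrics-server addon with the image and registry overridden (echoserver:1.4 pulled from fake.domain) and then describe the resulting deployment in kube-system. A hedged Go sketch of those two steps; the substring checked at the end is an assumption about how the registry and image overrides combine, not something confirmed by the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// enableAndVerify mirrors the EnableAddonWhileActive steps in the log:
// enable the metrics-server addon with an image/registry override, then
// describe the resulting deployment and look for the override.
func enableAndVerify(profile string) error {
	enable := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "metrics-server", "-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		return fmt.Errorf("enable failed: %v\n%s", err, out)
	}

	describe := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system")
	out, err := describe.CombinedOutput()
	if err != nil {
		return err
	}
	// Assumed form of the overridden image reference.
	if !strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
		return fmt.Errorf("image override not found in deployment")
	}
	return nil
}

func main() {
	if err := enableAndVerify("default-k8s-diff-port-972693"); err != nil {
		fmt.Println(err)
	}
}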

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (636.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-135920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 13:32:37.613643  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:32:37.883886  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-135920 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m36.232349054s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-135920 -n embed-certs-135920
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (636.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (591.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-566777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 13:32:39.231387  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-566777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (9m51.051678745s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-566777 -n no-preload-566777
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (591.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (603.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-972693 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 13:33:15.653016  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:18.574251  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/calico-507612/client.crt: no such file or directory
E0729 13:33:21.475582  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/kindnet-507612/client.crt: no such file or directory
E0729 13:33:25.893230  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:39.325161  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/custom-flannel-507612/client.crt: no such file or directory
E0729 13:33:46.373731  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/flannel-507612/client.crt: no such file or directory
E0729 13:33:46.439978  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:46.445280  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:46.455559  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:46.475827  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:46.516191  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:46.596523  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:46.757035  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:47.077767  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:47.718141  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:48.998980  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:50.913501  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/enable-default-cni-507612/client.crt: no such file or directory
E0729 13:33:51.559263  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:33:56.680052  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
E0729 13:34:06.920863  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/bridge-507612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-972693 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m3.413923101s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-972693 -n default-k8s-diff-port-972693
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (603.67s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-924039 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-924039 --alsologtostderr -v=3: (2.288684851s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-924039 -n old-k8s-version-924039: exit status 7 (63.10263ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-924039 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
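EnableAddonAfterStop runs against a stopped cluster, so the status check above exits with status 7 (stdout "Stopped"), which the test records as "may be ok" before enabling the dashboard addon. A small Go sketch of reading that exit code, assuming the same binary path as the log; the hostStatus helper is illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and
// returns stdout plus the exit code. Exit status 7 means the host is
// stopped, which the EnableAddonAfterStop test above treats as acceptable.
func hostStatus(profile string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	out, code := hostStatus("old-k8s-version-924039")
	fmt.Printf("host=%q exit=%d\n", out, code)
	if code == 7 {
		// Host is stopped; the log shows the addon enable step still runs
		// and succeeds, so we attempt it here as well.
		_ = exec.Command("out/minikube-linux-amd64",
			"addons", "enable", "dashboard", "-p", "old-k8s-version-924039",
			"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").Run()
	}
}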

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-615666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-615666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (49.092184529s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-615666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-615666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.144203162s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (6.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-615666 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-615666 --alsologtostderr -v=3: (6.888037085s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-615666 -n newest-cni-615666
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-615666 -n newest-cni-615666: exit status 7 (67.810318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-615666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-615666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-615666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (35.885214005s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-615666 -n newest-cni-615666
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-615666 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-615666 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-615666 -n newest-cni-615666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-615666 -n newest-cni-615666: exit status 2 (225.994779ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-615666 -n newest-cni-615666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-615666 -n newest-cni-615666: exit status 2 (227.483945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-615666 --alsologtostderr -v=1
E0729 13:59:27.880629  240340 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19341-233093/.minikube/profiles/functional-311529/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-615666 -n newest-cni-615666
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-615666 -n newest-cni-615666
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)
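The Pause test above pauses the profile, reads the APIServer and Kubelet status fields (each query exits with status 2 while paused, again treated as "may be ok"), then unpauses and reads them once more. A minimal Go sketch of that loop, assuming the binary path from the log; componentStatus is an illustrative helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus queries a single status field ({{.APIServer}} or
// {{.Kubelet}}) for a profile; while the cluster is paused the command
// exits non-zero, so the error is ignored and only stdout is returned.
func componentStatus(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "newest-cni-615666"

	_ = exec.Command("out/minikube-linux-amd64", "pause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	fmt.Println("paused:", componentStatus(profile, "APIServer"), componentStatus(profile, "Kubelet"))

	_ = exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	fmt.Println("resumed:", componentStatus(profile, "APIServer"), componentStatus(profile, "Kubelet"))
}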

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.86
272 TestNetworkPlugins/group/cilium 3.11
286 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-507612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-507612" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-507612

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507612"

                                                
                                                
----------------------- debugLogs end: kubenet-507612 [took: 2.715532304s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-507612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-507612
--- SKIP: TestNetworkPlugins/group/kubenet (2.86s)
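Note on the failures in the debugLogs block above: the kubenet profile was never created (the group is skipped before "minikube start" runs), so there is no "kubenet-507612" kubeconfig context or minikube profile to probe. Every kubectl-based probe therefore reports "context was not found" and every host-based probe reports "Profile not found". A minimal sketch of that same check, using client-go's clientcmd package to test whether a kubeconfig context exists before issuing probes; this is illustrative only and not part of the minikube test suite, and the profile name is taken from the log above:

// contextcheck.go - illustrative sketch, not part of the minikube test suite.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default locations (~/.kube/config, KUBECONFIG).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}
	// "kubenet-507612" is the profile name reported missing in the log above.
	name := "kubenet-507612"
	if _, ok := cfg.Contexts[name]; !ok {
		// Corresponds to the "context was not found" errors collected above.
		fmt.Printf("context %q does not exist; skipping kubectl probes\n", name)
		return
	}
	fmt.Printf("context %q exists\n", name)
}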

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-507612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-507612" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-507612

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-507612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507612"

                                                
                                                
----------------------- debugLogs end: cilium-507612 [took: 2.975517888s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-507612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-507612
--- SKIP: TestNetworkPlugins/group/cilium (3.11s)
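The "--- SKIP" entry above comes from an unconditional skip recorded at net_test.go:102. A minimal sketch, assuming nothing beyond the standard testing package, of how such a skip produces this output; it is not the actual net_test.go source:

// skip_sketch_test.go - illustrative sketch only.
package netsketch

import "testing"

func TestCiliumSkipSketch(t *testing.T) {
	// t.Skip marks the test as skipped and records the reason,
	// which is what shows up in the report entry above.
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}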

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-312895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-312895
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
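Unlike the unconditional skip above, this entry is gated on the VM driver: the test only runs on virtualbox and skips on all other drivers, as reported at start_stop_delete_test.go:103. A minimal sketch of such a driver-gated skip, assuming a hypothetical TEST_DRIVER environment variable for illustration (the real suite reads its driver from test flags):

// drivergate_sketch_test.go - illustrative sketch, not the actual suite code.
package startstopsketch

import (
	"os"
	"testing"
)

func TestDisableDriverMountsSketch(t *testing.T) {
	// Skip everywhere except the virtualbox driver.
	if os.Getenv("TEST_DRIVER") != "virtualbox" {
		t.Skip("skipping: disable-driver-mounts only runs on virtualbox")
	}
	// ... driver-mount assertions would go here ...
}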

                                                
                                    